Jan 20 17:56:46 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 20 17:56:46 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 20 17:56:46 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 17:56:46 localhost kernel: BIOS-provided physical RAM map:
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 17:56:46 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 20 17:56:46 localhost kernel: NX (Execute Disable) protection: active
Jan 20 17:56:46 localhost kernel: APIC: Static calls initialized
Jan 20 17:56:46 localhost kernel: SMBIOS 2.8 present.
Jan 20 17:56:46 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 20 17:56:46 localhost kernel: Hypervisor detected: KVM
Jan 20 17:56:46 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 17:56:46 localhost kernel: kvm-clock: using sched offset of 3844585731 cycles
Jan 20 17:56:46 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 17:56:46 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 20 17:56:46 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 17:56:46 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 17:56:46 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 20 17:56:46 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 17:56:46 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 20 17:56:46 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 20 17:56:46 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 20 17:56:46 localhost kernel: Using GB pages for direct mapping
Jan 20 17:56:46 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 20 17:56:46 localhost kernel: ACPI: Early table checksum verification disabled
Jan 20 17:56:46 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 17:56:46 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 17:56:46 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 17:56:46 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 17:56:46 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 20 17:56:46 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 17:56:46 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 17:56:46 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 20 17:56:46 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 20 17:56:46 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 20 17:56:46 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 20 17:56:46 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 20 17:56:46 localhost kernel: No NUMA configuration found
Jan 20 17:56:46 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 20 17:56:46 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 20 17:56:46 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 20 17:56:46 localhost kernel: Zone ranges:
Jan 20 17:56:46 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 17:56:46 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 20 17:56:46 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 17:56:46 localhost kernel:   Device   empty
Jan 20 17:56:46 localhost kernel: Movable zone start for each node
Jan 20 17:56:46 localhost kernel: Early memory node ranges
Jan 20 17:56:46 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 17:56:46 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 20 17:56:46 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 17:56:46 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 20 17:56:46 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 17:56:46 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 17:56:46 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 20 17:56:46 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 17:56:46 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 17:56:46 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 17:56:46 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 17:56:46 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 17:56:46 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 17:56:46 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 17:56:46 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 17:56:46 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 17:56:46 localhost kernel: TSC deadline timer available
Jan 20 17:56:46 localhost kernel: CPU topo: Max. logical packages:   8
Jan 20 17:56:46 localhost kernel: CPU topo: Max. logical dies:       8
Jan 20 17:56:46 localhost kernel: CPU topo: Max. dies per package:   1
Jan 20 17:56:46 localhost kernel: CPU topo: Max. threads per core:   1
Jan 20 17:56:46 localhost kernel: CPU topo: Num. cores per package:     1
Jan 20 17:56:46 localhost kernel: CPU topo: Num. threads per package:   1
Jan 20 17:56:46 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 20 17:56:46 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 20 17:56:46 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 20 17:56:46 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 20 17:56:46 localhost kernel: Booting paravirtualized kernel on KVM
Jan 20 17:56:46 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 17:56:46 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 20 17:56:46 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 20 17:56:46 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 20 17:56:46 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 20 17:56:46 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 20 17:56:46 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 17:56:46 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 20 17:56:46 localhost kernel: random: crng init done
Jan 20 17:56:46 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 20 17:56:46 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 17:56:46 localhost kernel: Fallback order for Node 0: 0 
Jan 20 17:56:46 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 20 17:56:46 localhost kernel: Policy zone: Normal
Jan 20 17:56:46 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 17:56:46 localhost kernel: software IO TLB: area num 8.
Jan 20 17:56:46 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 20 17:56:46 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 20 17:56:46 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 20 17:56:46 localhost kernel: Dynamic Preempt: voluntary
Jan 20 17:56:46 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 17:56:46 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 20 17:56:46 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 20 17:56:46 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 20 17:56:46 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 20 17:56:46 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 20 17:56:46 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 17:56:46 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 20 17:56:46 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 17:56:46 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 17:56:46 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 17:56:46 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 20 17:56:46 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 17:56:46 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 20 17:56:46 localhost kernel: Console: colour VGA+ 80x25
Jan 20 17:56:46 localhost kernel: printk: console [ttyS0] enabled
Jan 20 17:56:46 localhost kernel: ACPI: Core revision 20230331
Jan 20 17:56:46 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 17:56:46 localhost kernel: x2apic enabled
Jan 20 17:56:46 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 17:56:46 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 17:56:46 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 20 17:56:46 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 17:56:46 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 17:56:46 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 17:56:46 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 17:56:46 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 17:56:46 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 17:56:46 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 20 17:56:46 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 20 17:56:46 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 17:56:46 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 17:56:46 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 17:56:46 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 17:56:46 localhost kernel: x86/bugs: return thunk changed
Jan 20 17:56:46 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 17:56:46 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 17:56:46 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 17:56:46 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 17:56:46 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 20 17:56:46 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 17:56:46 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 20 17:56:46 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 20 17:56:46 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 20 17:56:46 localhost kernel: landlock: Up and running.
Jan 20 17:56:46 localhost kernel: Yama: becoming mindful.
Jan 20 17:56:46 localhost kernel: SELinux:  Initializing.
Jan 20 17:56:46 localhost kernel: LSM support for eBPF active
Jan 20 17:56:46 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 17:56:46 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 17:56:46 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 20 17:56:46 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 20 17:56:46 localhost kernel: ... version:                0
Jan 20 17:56:46 localhost kernel: ... bit width:              48
Jan 20 17:56:46 localhost kernel: ... generic registers:      6
Jan 20 17:56:46 localhost kernel: ... value mask:             0000ffffffffffff
Jan 20 17:56:46 localhost kernel: ... max period:             00007fffffffffff
Jan 20 17:56:46 localhost kernel: ... fixed-purpose events:   0
Jan 20 17:56:46 localhost kernel: ... event mask:             000000000000003f
Jan 20 17:56:46 localhost kernel: signal: max sigframe size: 1776
Jan 20 17:56:46 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 20 17:56:46 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 20 17:56:46 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 20 17:56:46 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 20 17:56:46 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 20 17:56:46 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 20 17:56:46 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 20 17:56:46 localhost kernel: node 0 deferred pages initialised in 26ms
Jan 20 17:56:46 localhost kernel: Memory: 7763888K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618356K reserved, 0K cma-reserved)
Jan 20 17:56:46 localhost kernel: devtmpfs: initialized
Jan 20 17:56:46 localhost kernel: x86/mm: Memory block size: 128MB
Jan 20 17:56:46 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 17:56:46 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 20 17:56:46 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 17:56:46 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 17:56:46 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 20 17:56:46 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 17:56:46 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 17:56:46 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 20 17:56:46 localhost kernel: audit: type=2000 audit(1768931804.228:1): state=initialized audit_enabled=0 res=1
Jan 20 17:56:46 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 20 17:56:46 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 17:56:46 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 17:56:46 localhost kernel: cpuidle: using governor menu
Jan 20 17:56:46 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 17:56:46 localhost kernel: PCI: Using configuration type 1 for base access
Jan 20 17:56:46 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 20 17:56:46 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 17:56:46 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 17:56:46 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 17:56:46 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 17:56:46 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 17:56:46 localhost kernel: Demotion targets for Node 0: null
Jan 20 17:56:46 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 17:56:46 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 20 17:56:46 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 20 17:56:46 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 17:56:46 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 17:56:46 localhost kernel: ACPI: Interpreter enabled
Jan 20 17:56:46 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 20 17:56:46 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 17:56:46 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 17:56:46 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 17:56:46 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 20 17:56:46 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 17:56:46 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [3] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [4] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [5] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [6] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [7] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [8] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [9] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [10] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [11] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [12] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [13] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [14] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [15] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [16] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [17] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [18] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [19] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [20] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [21] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [22] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [23] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [24] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [25] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [26] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [27] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [28] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [29] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [30] registered
Jan 20 17:56:46 localhost kernel: acpiphp: Slot [31] registered
Jan 20 17:56:46 localhost kernel: PCI host bridge to bus 0000:00
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 20 17:56:46 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 20 17:56:46 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 20 17:56:46 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 17:56:46 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 20 17:56:46 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 20 17:56:46 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 17:56:46 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 17:56:46 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 17:56:46 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 17:56:46 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 20 17:56:46 localhost kernel: iommu: Default domain type: Translated
Jan 20 17:56:46 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 17:56:46 localhost kernel: SCSI subsystem initialized
Jan 20 17:56:46 localhost kernel: ACPI: bus type USB registered
Jan 20 17:56:46 localhost kernel: usbcore: registered new interface driver usbfs
Jan 20 17:56:46 localhost kernel: usbcore: registered new interface driver hub
Jan 20 17:56:46 localhost kernel: usbcore: registered new device driver usb
Jan 20 17:56:46 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 17:56:46 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 20 17:56:46 localhost kernel: PTP clock support registered
Jan 20 17:56:46 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 20 17:56:46 localhost kernel: NetLabel: Initializing
Jan 20 17:56:46 localhost kernel: NetLabel:  domain hash size = 128
Jan 20 17:56:46 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 20 17:56:46 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 20 17:56:46 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 20 17:56:46 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 17:56:46 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 17:56:46 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 20 17:56:46 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 17:56:46 localhost kernel: vgaarb: loaded
Jan 20 17:56:46 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 17:56:46 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 17:56:46 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 17:56:46 localhost kernel: pnp: PnP ACPI init
Jan 20 17:56:46 localhost kernel: pnp 00:03: [dma 2]
Jan 20 17:56:46 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 20 17:56:46 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 17:56:46 localhost kernel: NET: Registered PF_INET protocol family
Jan 20 17:56:46 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 17:56:46 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 20 17:56:46 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 17:56:46 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 17:56:46 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 20 17:56:46 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 20 17:56:46 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 20 17:56:46 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 17:56:46 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 17:56:46 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 17:56:46 localhost kernel: NET: Registered PF_XDP protocol family
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 20 17:56:46 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 20 17:56:46 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 20 17:56:46 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 20 17:56:46 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 71816 usecs
Jan 20 17:56:46 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 20 17:56:46 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 17:56:46 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 20 17:56:46 localhost kernel: ACPI: bus type thunderbolt registered
Jan 20 17:56:46 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 20 17:56:46 localhost kernel: Initialise system trusted keyrings
Jan 20 17:56:46 localhost kernel: Key type blacklist registered
Jan 20 17:56:46 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 20 17:56:46 localhost kernel: zbud: loaded
Jan 20 17:56:46 localhost kernel: integrity: Platform Keyring initialized
Jan 20 17:56:46 localhost kernel: integrity: Machine keyring initialized
Jan 20 17:56:46 localhost kernel: Freeing initrd memory: 87956K
Jan 20 17:56:46 localhost kernel: NET: Registered PF_ALG protocol family
Jan 20 17:56:46 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 20 17:56:46 localhost kernel: Key type asymmetric registered
Jan 20 17:56:46 localhost kernel: Asymmetric key parser 'x509' registered
Jan 20 17:56:46 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 20 17:56:46 localhost kernel: io scheduler mq-deadline registered
Jan 20 17:56:46 localhost kernel: io scheduler kyber registered
Jan 20 17:56:46 localhost kernel: io scheduler bfq registered
Jan 20 17:56:46 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 20 17:56:46 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 20 17:56:46 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 20 17:56:46 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 20 17:56:46 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 20 17:56:46 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 20 17:56:46 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 20 17:56:46 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 17:56:46 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 17:56:46 localhost kernel: Non-volatile memory driver v1.3
Jan 20 17:56:46 localhost kernel: rdac: device handler registered
Jan 20 17:56:46 localhost kernel: hp_sw: device handler registered
Jan 20 17:56:46 localhost kernel: emc: device handler registered
Jan 20 17:56:46 localhost kernel: alua: device handler registered
Jan 20 17:56:46 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 20 17:56:46 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 20 17:56:46 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 20 17:56:46 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 20 17:56:46 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 20 17:56:46 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 20 17:56:46 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 20 17:56:46 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 20 17:56:46 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 20 17:56:46 localhost kernel: hub 1-0:1.0: USB hub found
Jan 20 17:56:46 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 20 17:56:46 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 20 17:56:46 localhost kernel: usbserial: USB Serial support registered for generic
Jan 20 17:56:46 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 17:56:46 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 17:56:46 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 17:56:46 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 17:56:46 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 17:56:46 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 20 17:56:46 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 17:56:46 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T17:56:45 UTC (1768931805)
Jan 20 17:56:46 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 17:56:46 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 17:56:46 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 17:56:46 localhost kernel: usbcore: registered new interface driver usbhid
Jan 20 17:56:46 localhost kernel: usbhid: USB HID core driver
Jan 20 17:56:46 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 20 17:56:46 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 20 17:56:46 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 20 17:56:46 localhost kernel: Initializing XFRM netlink socket
Jan 20 17:56:46 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 20 17:56:46 localhost kernel: Segment Routing with IPv6
Jan 20 17:56:46 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 20 17:56:46 localhost kernel: mpls_gso: MPLS GSO support
Jan 20 17:56:46 localhost kernel: IPI shorthand broadcast: enabled
Jan 20 17:56:46 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 17:56:46 localhost kernel: AES CTR mode by8 optimization enabled
Jan 20 17:56:46 localhost kernel: sched_clock: Marking stable (1872011175, 159318944)->(2145473503, -114143384)
Jan 20 17:56:46 localhost kernel: registered taskstats version 1
Jan 20 17:56:46 localhost kernel: Loading compiled-in X.509 certificates
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 20 17:56:46 localhost kernel: Demotion targets for Node 0: null
Jan 20 17:56:46 localhost kernel: page_owner is disabled
Jan 20 17:56:46 localhost kernel: Key type .fscrypt registered
Jan 20 17:56:46 localhost kernel: Key type fscrypt-provisioning registered
Jan 20 17:56:46 localhost kernel: Key type big_key registered
Jan 20 17:56:46 localhost kernel: Key type encrypted registered
Jan 20 17:56:46 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 17:56:46 localhost kernel: Loading compiled-in module X.509 certificates
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 17:56:46 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 20 17:56:46 localhost kernel: ima: No architecture policies found
Jan 20 17:56:46 localhost kernel: evm: Initialising EVM extended attributes:
Jan 20 17:56:46 localhost kernel: evm: security.selinux
Jan 20 17:56:46 localhost kernel: evm: security.SMACK64 (disabled)
Jan 20 17:56:46 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 20 17:56:46 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 20 17:56:46 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 20 17:56:46 localhost kernel: evm: security.apparmor (disabled)
Jan 20 17:56:46 localhost kernel: evm: security.ima
Jan 20 17:56:46 localhost kernel: evm: security.capability
Jan 20 17:56:46 localhost kernel: evm: HMAC attrs: 0x1
Jan 20 17:56:46 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 20 17:56:46 localhost kernel: Running certificate verification RSA selftest
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 20 17:56:46 localhost kernel: Running certificate verification ECDSA selftest
Jan 20 17:56:46 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 20 17:56:46 localhost kernel: clk: Disabling unused clocks
Jan 20 17:56:46 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 20 17:56:46 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 20 17:56:46 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 20 17:56:46 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 20 17:56:46 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 20 17:56:46 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 20 17:56:46 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 20 17:56:46 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 20 17:56:46 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 20 17:56:46 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 20 17:56:46 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 20 17:56:46 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 20 17:56:46 localhost kernel: Run /init as init process
Jan 20 17:56:46 localhost kernel:   with arguments:
Jan 20 17:56:46 localhost kernel:     /init
Jan 20 17:56:46 localhost kernel:   with environment:
Jan 20 17:56:46 localhost kernel:     HOME=/
Jan 20 17:56:46 localhost kernel:     TERM=linux
Jan 20 17:56:46 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 20 17:56:46 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 17:56:46 localhost systemd[1]: Detected virtualization kvm.
Jan 20 17:56:46 localhost systemd[1]: Detected architecture x86-64.
Jan 20 17:56:46 localhost systemd[1]: Running in initrd.
Jan 20 17:56:46 localhost systemd[1]: No hostname configured, using default hostname.
Jan 20 17:56:46 localhost systemd[1]: Hostname set to <localhost>.
Jan 20 17:56:46 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 20 17:56:46 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 20 17:56:46 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 17:56:46 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 17:56:46 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 20 17:56:46 localhost systemd[1]: Reached target Local File Systems.
Jan 20 17:56:46 localhost systemd[1]: Reached target Path Units.
Jan 20 17:56:46 localhost systemd[1]: Reached target Slice Units.
Jan 20 17:56:46 localhost systemd[1]: Reached target Swaps.
Jan 20 17:56:46 localhost systemd[1]: Reached target Timer Units.
Jan 20 17:56:46 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 17:56:46 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 20 17:56:46 localhost systemd[1]: Listening on Journal Socket.
Jan 20 17:56:46 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 17:56:46 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 17:56:46 localhost systemd[1]: Reached target Socket Units.
Jan 20 17:56:46 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 17:56:46 localhost systemd[1]: Starting Journal Service...
Jan 20 17:56:46 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 17:56:46 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 17:56:46 localhost systemd[1]: Starting Create System Users...
Jan 20 17:56:46 localhost systemd[1]: Starting Setup Virtual Console...
Jan 20 17:56:46 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 17:56:46 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 17:56:46 localhost systemd-journald[301]: Journal started
Jan 20 17:56:46 localhost systemd-journald[301]: Runtime Journal (/run/log/journal/19a62fa872e04d98a48bb9301ceb89c2) is 8.0M, max 153.6M, 145.6M free.
Jan 20 17:56:46 localhost systemd-sysusers[306]: Creating group 'users' with GID 100.
Jan 20 17:56:46 localhost systemd-sysusers[306]: Creating group 'dbus' with GID 81.
Jan 20 17:56:46 localhost systemd[1]: Started Journal Service.
Jan 20 17:56:46 localhost systemd-sysusers[306]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 20 17:56:46 localhost systemd[1]: Finished Create System Users.
Jan 20 17:56:46 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 17:56:46 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 17:56:46 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 17:56:46 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 17:56:46 localhost systemd[1]: Finished Setup Virtual Console.
Jan 20 17:56:46 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 20 17:56:46 localhost systemd[1]: Starting dracut cmdline hook...
Jan 20 17:56:46 localhost dracut-cmdline[321]: dracut-9 dracut-057-102.git20250818.el9
Jan 20 17:56:46 localhost dracut-cmdline[321]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 17:56:46 localhost systemd[1]: Finished dracut cmdline hook.
Jan 20 17:56:46 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 20 17:56:46 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 17:56:46 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 20 17:56:46 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 20 17:56:46 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 20 17:56:46 localhost kernel: RPC: Registered udp transport module.
Jan 20 17:56:46 localhost kernel: RPC: Registered tcp transport module.
Jan 20 17:56:46 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 20 17:56:46 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 20 17:56:46 localhost rpc.statd[440]: Version 2.5.4 starting
Jan 20 17:56:46 localhost rpc.statd[440]: Initializing NSM state
Jan 20 17:56:46 localhost rpc.idmapd[446]: Setting log level to 0
Jan 20 17:56:46 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 20 17:56:46 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 17:56:46 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 17:56:46 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 17:56:46 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 20 17:56:46 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 20 17:56:46 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 17:56:47 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 20 17:56:47 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 17:56:47 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 17:56:47 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 17:56:47 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 17:56:47 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 17:56:47 localhost systemd[1]: Reached target Network.
Jan 20 17:56:47 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 17:56:47 localhost systemd[1]: Starting dracut initqueue hook...
Jan 20 17:56:47 localhost systemd-udevd[477]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 17:56:47 localhost kernel: libata version 3.00 loaded.
Jan 20 17:56:47 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 20 17:56:47 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 20 17:56:47 localhost kernel: scsi host0: ata_piix
Jan 20 17:56:47 localhost kernel: scsi host1: ata_piix
Jan 20 17:56:47 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 20 17:56:47 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 20 17:56:47 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 20 17:56:47 localhost kernel:  vda: vda1
Jan 20 17:56:47 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 20 17:56:47 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 20 17:56:47 localhost systemd[1]: Reached target System Initialization.
Jan 20 17:56:47 localhost systemd[1]: Reached target Basic System.
Jan 20 17:56:47 localhost kernel: ata1: found unknown device (class 0)
Jan 20 17:56:47 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 17:56:47 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 20 17:56:47 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 20 17:56:47 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 17:56:47 localhost systemd[1]: Reached target Initrd Root Device.
Jan 20 17:56:47 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 17:56:47 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 17:56:47 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 20 17:56:47 localhost systemd[1]: Finished dracut initqueue hook.
Jan 20 17:56:47 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 17:56:47 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 20 17:56:47 localhost systemd[1]: Reached target Remote File Systems.
Jan 20 17:56:47 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 20 17:56:47 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 20 17:56:47 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 20 17:56:47 localhost systemd-fsck[552]: /usr/sbin/fsck.xfs: XFS file system.
Jan 20 17:56:47 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 17:56:47 localhost systemd[1]: Mounting /sysroot...
Jan 20 17:56:48 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 20 17:56:48 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 20 17:56:48 localhost kernel: XFS (vda1): Ending clean mount
Jan 20 17:56:48 localhost systemd[1]: Mounted /sysroot.
Jan 20 17:56:48 localhost systemd[1]: Reached target Initrd Root File System.
Jan 20 17:56:48 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 20 17:56:48 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 20 17:56:48 localhost systemd[1]: Reached target Initrd File Systems.
Jan 20 17:56:48 localhost systemd[1]: Reached target Initrd Default Target.
Jan 20 17:56:48 localhost systemd[1]: Starting dracut mount hook...
Jan 20 17:56:48 localhost systemd[1]: Finished dracut mount hook.
Jan 20 17:56:48 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 20 17:56:48 localhost rpc.idmapd[446]: exiting on signal 15
Jan 20 17:56:48 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 20 17:56:48 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 20 17:56:48 localhost systemd[1]: Stopped target Network.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Timer Units.
Jan 20 17:56:48 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 20 17:56:48 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Basic System.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Path Units.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Remote File Systems.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Slice Units.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Socket Units.
Jan 20 17:56:48 localhost systemd[1]: Stopped target System Initialization.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Local File Systems.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Swaps.
Jan 20 17:56:48 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut mount hook.
Jan 20 17:56:48 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 20 17:56:48 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 20 17:56:48 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 20 17:56:48 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 20 17:56:48 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 20 17:56:48 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 20 17:56:48 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 20 17:56:48 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 20 17:56:48 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 20 17:56:48 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 20 17:56:48 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 20 17:56:48 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 20 17:56:48 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Closed udev Control Socket.
Jan 20 17:56:48 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Closed udev Kernel Socket.
Jan 20 17:56:48 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 20 17:56:48 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 20 17:56:48 localhost systemd[1]: Starting Cleanup udev Database...
Jan 20 17:56:48 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 20 17:56:48 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 20 17:56:48 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Stopped Create System Users.
Jan 20 17:56:48 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 17:56:48 localhost systemd[1]: Finished Cleanup udev Database.
Jan 20 17:56:48 localhost systemd[1]: Reached target Switch Root.
Jan 20 17:56:48 localhost systemd[1]: Starting Switch Root...
Jan 20 17:56:48 localhost systemd[1]: Switching root.
Jan 20 17:56:48 localhost systemd-journald[301]: Journal stopped
Jan 20 17:56:49 localhost systemd-journald[301]: Received SIGTERM from PID 1 (systemd).
Jan 20 17:56:49 localhost kernel: audit: type=1404 audit(1768931808.540:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability open_perms=1
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 17:56:49 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 17:56:49 localhost kernel: audit: type=1403 audit(1768931808.659:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 17:56:49 localhost systemd[1]: Successfully loaded SELinux policy in 122.326ms.
Jan 20 17:56:49 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.315ms.
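The audit records and capability lines above mark SELinux switching to enforcing (enforcing=1 old_enforcing=0) as the real root filesystem takes over from the initrd. A quick cross-check of the resulting state, assuming the standard policycoreutils tools are installed:

    # Current mode: Enforcing, Permissive, or Disabled
    getenforce
    # Fuller report: loaded policy name, policy version, MLS status
    sestatus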
Jan 20 17:56:49 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 17:56:49 localhost systemd[1]: Detected virtualization kvm.
Jan 20 17:56:49 localhost systemd[1]: Detected architecture x86-64.
Jan 20 17:56:49 localhost systemd-rc-local-generator[635]: /etc/rc.d/rc.local is not marked executable, skipping.
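The generator message above is benign but actionable: systemd-rc-local-generator only creates rc-local.service when /etc/rc.d/rc.local is executable. If the legacy hook is actually wanted, the fix is a one-line sketch (skip it if nothing uses rc.local):

    # Mark the legacy hook executable so the generator wires it in
    chmod +x /etc/rc.d/rc.local
    # The generated unit appears after a daemon-reload or on the next boot
    systemctl daemon-reload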
Jan 20 17:56:49 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Stopped Switch Root.
Jan 20 17:56:49 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 17:56:49 localhost systemd[1]: Created slice Slice /system/getty.
Jan 20 17:56:49 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 20 17:56:49 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 20 17:56:49 localhost systemd[1]: Created slice User and Session Slice.
Jan 20 17:56:49 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 17:56:49 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 20 17:56:49 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 20 17:56:49 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 17:56:49 localhost systemd[1]: Stopped target Switch Root.
Jan 20 17:56:49 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 20 17:56:49 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 20 17:56:49 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 20 17:56:49 localhost systemd[1]: Reached target Path Units.
Jan 20 17:56:49 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 20 17:56:49 localhost systemd[1]: Reached target Slice Units.
Jan 20 17:56:49 localhost systemd[1]: Reached target Swaps.
Jan 20 17:56:49 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 20 17:56:49 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 20 17:56:49 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 20 17:56:49 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 20 17:56:49 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 20 17:56:49 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 17:56:49 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 17:56:49 localhost systemd[1]: Mounting Huge Pages File System...
Jan 20 17:56:49 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 20 17:56:49 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 20 17:56:49 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 20 17:56:49 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 17:56:49 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 17:56:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 17:56:49 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 20 17:56:49 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 20 17:56:49 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 20 17:56:49 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 20 17:56:49 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 20 17:56:49 localhost systemd[1]: Stopped Journal Service.
Jan 20 17:56:49 localhost systemd[1]: Starting Journal Service...
Jan 20 17:56:49 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 17:56:49 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 20 17:56:49 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 17:56:49 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 20 17:56:49 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 17:56:49 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 17:56:49 localhost kernel: fuse: init (API version 7.37)
Jan 20 17:56:49 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 17:56:49 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 20 17:56:49 localhost systemd[1]: Mounted Huge Pages File System.
Jan 20 17:56:49 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 20 17:56:49 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 20 17:56:49 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 20 17:56:49 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 17:56:49 localhost systemd-journald[676]: Journal started
Jan 20 17:56:49 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 17:56:48 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 20 17:56:48 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Started Journal Service.
Jan 20 17:56:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 17:56:49 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 20 17:56:49 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 20 17:56:49 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 20 17:56:49 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 20 17:56:49 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 20 17:56:49 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 17:56:49 localhost kernel: ACPI: bus type drm_connector registered
Jan 20 17:56:49 localhost systemd[1]: Mounting FUSE Control File System...
Jan 20 17:56:49 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 17:56:49 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 20 17:56:49 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 20 17:56:49 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 17:56:49 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 20 17:56:49 localhost systemd[1]: Starting Create System Users...
Jan 20 17:56:49 localhost systemd-journald[676]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 17:56:49 localhost systemd-journald[676]: Received client request to flush runtime journal.
Jan 20 17:56:49 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 20 17:56:49 localhost systemd[1]: Mounted FUSE Control File System.
Jan 20 17:56:49 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 20 17:56:49 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 20 17:56:49 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 17:56:49 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 17:56:49 localhost systemd[1]: Finished Create System Users.
Jan 20 17:56:49 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 17:56:49 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 17:56:49 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 20 17:56:49 localhost systemd[1]: Reached target Local File Systems.
Jan 20 17:56:49 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 20 17:56:49 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 20 17:56:49 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 17:56:49 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 20 17:56:49 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 20 17:56:49 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 20 17:56:49 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 17:56:49 localhost bootctl[694]: Couldn't find EFI system partition, skipping.
Jan 20 17:56:49 localhost systemd[1]: Finished Automatic Boot Loader Update.
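bootctl's "Couldn't find EFI system partition, skipping" is expected on a BIOS-booted guest: with no ESP present, the automatic boot-loader update has nothing to do. One way to confirm which firmware path a host came up on:

    # Directory exists only when the kernel was booted via UEFI
    test -d /sys/firmware/efi && echo UEFI || echo BIOS
    # bootctl reports the same, plus ESP details when one exists
    bootctl status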
Jan 20 17:56:49 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 17:56:49 localhost systemd[1]: Starting Security Auditing Service...
Jan 20 17:56:49 localhost systemd[1]: Starting RPC Bind...
Jan 20 17:56:49 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 20 17:56:49 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 20 17:56:49 localhost auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 20 17:56:49 localhost auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 20 17:56:49 localhost augenrules[705]: /sbin/augenrules: No change
Jan 20 17:56:49 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 20 17:56:49 localhost systemd[1]: Started RPC Bind.
Jan 20 17:56:49 localhost augenrules[720]: No rules
Jan 20 17:56:49 localhost augenrules[720]: enabled 1
Jan 20 17:56:49 localhost augenrules[720]: failure 1
Jan 20 17:56:49 localhost augenrules[720]: pid 700
Jan 20 17:56:49 localhost augenrules[720]: rate_limit 0
Jan 20 17:56:49 localhost augenrules[720]: backlog_limit 8192
Jan 20 17:56:49 localhost augenrules[720]: lost 0
Jan 20 17:56:49 localhost augenrules[720]: backlog 2
Jan 20 17:56:49 localhost augenrules[720]: backlog_wait_time 60000
Jan 20 17:56:49 localhost augenrules[720]: backlog_wait_time_actual 0
Jan 20 17:56:49 localhost augenrules[720]: enabled 1
Jan 20 17:56:49 localhost augenrules[720]: failure 1
Jan 20 17:56:49 localhost augenrules[720]: pid 700
Jan 20 17:56:49 localhost augenrules[720]: rate_limit 0
Jan 20 17:56:49 localhost augenrules[720]: backlog_limit 8192
Jan 20 17:56:49 localhost augenrules[720]: lost 0
Jan 20 17:56:49 localhost augenrules[720]: backlog 0
Jan 20 17:56:49 localhost augenrules[720]: backlog_wait_time 60000
Jan 20 17:56:49 localhost augenrules[720]: backlog_wait_time_actual 0
Jan 20 17:56:49 localhost augenrules[720]: enabled 1
Jan 20 17:56:49 localhost augenrules[720]: failure 1
Jan 20 17:56:49 localhost augenrules[720]: pid 700
Jan 20 17:56:49 localhost augenrules[720]: rate_limit 0
Jan 20 17:56:49 localhost augenrules[720]: backlog_limit 8192
Jan 20 17:56:49 localhost augenrules[720]: lost 0
Jan 20 17:56:49 localhost augenrules[720]: backlog 3
Jan 20 17:56:49 localhost augenrules[720]: backlog_wait_time 60000
Jan 20 17:56:49 localhost augenrules[720]: backlog_wait_time_actual 0
Jan 20 17:56:49 localhost systemd[1]: Started Security Auditing Service.
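The augenrules blocks above are the kernel audit status, echoed once per pass as the (here empty) rule sets are loaded; "No rules" means nothing persistent was compiled from /etc/audit/rules.d. The same numbers can be queried directly, assuming the audit userspace tools are present:

    # Kernel audit status: enabled flag, failure mode, backlog, lost records
    auditctl -s
    # Currently loaded rules; prints "No rules" when the list is empty
    auditctl -l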
Jan 20 17:56:49 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 20 17:56:49 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 20 17:56:49 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 20 17:56:49 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 17:56:49 localhost systemd[1]: Starting Update is Completed...
Jan 20 17:56:49 localhost systemd[1]: Finished Update is Completed.
Jan 20 17:56:49 localhost systemd-udevd[728]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 17:56:49 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 17:56:49 localhost systemd[1]: Reached target System Initialization.
Jan 20 17:56:49 localhost systemd[1]: Started dnf makecache --timer.
Jan 20 17:56:49 localhost systemd[1]: Started Daily rotation of log files.
Jan 20 17:56:49 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 20 17:56:49 localhost systemd[1]: Reached target Timer Units.
Jan 20 17:56:49 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 17:56:49 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 20 17:56:49 localhost systemd[1]: Reached target Socket Units.
Jan 20 17:56:49 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 20 17:56:49 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 17:56:49 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 20 17:56:49 localhost systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 17:56:49 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 17:56:49 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 17:56:49 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 17:56:49 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 20 17:56:49 localhost systemd[1]: Reached target Basic System.
Jan 20 17:56:49 localhost dbus-broker-lau[758]: Ready
Jan 20 17:56:49 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 20 17:56:49 localhost systemd[1]: Starting NTP client/server...
Jan 20 17:56:49 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 20 17:56:49 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 20 17:56:50 localhost chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 17:56:50 localhost chronyd[784]: Loaded 0 symmetric keys
Jan 20 17:56:50 localhost chronyd[784]: Using right/UTC timezone to obtain leap second data
Jan 20 17:56:50 localhost chronyd[784]: Loaded seccomp filter (level 2)
Jan 20 17:56:50 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 20 17:56:50 localhost systemd[1]: Started irqbalance daemon.
Jan 20 17:56:50 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 20 17:56:50 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 17:56:50 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 17:56:50 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 17:56:50 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 20 17:56:50 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 20 17:56:50 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 20 17:56:50 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 20 17:56:50 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 17:56:50 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 17:56:50 localhost systemd[1]: Starting User Login Management...
Jan 20 17:56:50 localhost systemd[1]: Started NTP client/server.
Jan 20 17:56:50 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 20 17:56:50 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 20 17:56:50 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 20 17:56:50 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 20 17:56:50 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 20 17:56:50 localhost kernel: kvm_amd: TSC scaling supported
Jan 20 17:56:50 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 20 17:56:50 localhost kernel: kvm_amd: Nested Paging enabled
Jan 20 17:56:50 localhost kernel: kvm_amd: LBR virtualization supported
Jan 20 17:56:50 localhost systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 17:56:50 localhost systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 17:56:50 localhost kernel: Console: switching to colour dummy device 80x25
Jan 20 17:56:50 localhost systemd-logind[796]: New seat seat0.
Jan 20 17:56:50 localhost systemd[1]: Started User Login Management.
Jan 20 17:56:50 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 20 17:56:50 localhost kernel: [drm] features: -context_init
Jan 20 17:56:50 localhost kernel: [drm] number of scanouts: 1
Jan 20 17:56:50 localhost kernel: [drm] number of cap sets: 0
Jan 20 17:56:50 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 20 17:56:50 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 20 17:56:50 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 20 17:56:50 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 20 17:56:50 localhost iptables.init[778]: iptables: Applying firewall rules: [  OK  ]
Jan 20 17:56:50 localhost systemd[1]: Finished IPv4 firewall with iptables.
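The two nft_compat deprecation warnings a few lines up belong to this unit: iptables on RHEL 9 is the nftables-backed front end, so the rules the init script just applied reach the kernel through the compat shim being warned about. The ruleset can be read from either side; a sketch:

    # Legacy-style listing via the iptables-nft front end
    iptables -S
    # The same rules as the kernel actually stores them
    nft list ruleset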
Jan 20 17:56:50 localhost cloud-init[837]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 20 Jan 2026 17:56:50 +0000. Up 6.69 seconds.
Jan 20 17:56:50 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 20 17:56:50 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 20 17:56:50 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpcavbebdn.mount: Deactivated successfully.
Jan 20 17:56:50 localhost systemd[1]: Starting Hostname Service...
Jan 20 17:56:50 localhost systemd[1]: Started Hostname Service.
Jan 20 17:56:50 np0005589270.novalocal systemd-hostnamed[851]: Hostname set to <np0005589270.novalocal> (static)
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Reached target Preparation for Network.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Starting Network Manager...
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9012] NetworkManager (version 1.54.3-2.el9) is starting... (boot:7a60faef-372d-4827-b0d0-8fdd6d433663)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9018] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9088] manager[0x55cf84447000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9126] hostname: hostname: using hostnamed
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9127] hostname: static hostname changed from (none) to "np0005589270.novalocal"
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9131] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9230] manager[0x55cf84447000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9230] manager[0x55cf84447000]: rfkill: WWAN hardware radio set enabled
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9265] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9266] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9266] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9267] manager: Networking is enabled by state file
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9269] settings: Loaded settings plugin: keyfile (internal)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9277] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9293] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
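NetworkManager names its own remedy for the deprecation it logs: profiles stored as ifcfg files under /etc/sysconfig/network-scripts can be converted in place to keyfiles. A minimal sketch of the suggested migration (back up the directory first if the profiles matter):

    # Rewrites each ifcfg-rh profile as a keyfile under
    # /etc/NetworkManager/system-connections/
    nmcli connection migrate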
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9302] dhcp: init: Using DHCP client 'internal'
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9306] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9319] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9327] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9335] device (lo): Activation: starting connection 'lo' (ee6edf19-39c6-4a96-abbb-0d8aa9c964b6)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9346] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9350] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9375] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9380] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9384] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9386] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9387] device (eth0): carrier: link connected
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9391] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9396] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9402] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9406] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9406] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9408] manager: NetworkManager state is now CONNECTING
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9409] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9416] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9419] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Started Network Manager.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Reached target Network.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9693] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9697] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 17:56:50 np0005589270.novalocal NetworkManager[855]: <info>  [1768931810.9703] device (lo): Activation: successful, device activated.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Reached target NFS client services.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: Reached target Remote File Systems.
Jan 20 17:56:50 np0005589270.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0041] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0058] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0086] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0110] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0111] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0114] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0116] device (eth0): Activation: successful, device activated.
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0120] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 17:56:51 np0005589270.novalocal NetworkManager[855]: <info>  [1768931811.0122] manager: startup complete
Jan 20 17:56:51 np0005589270.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 17:56:51 np0005589270.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 20 Jan 2026 17:56:51 +0000. Up 7.65 seconds.
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |  eth0  | True |         38.102.83.13         | 255.255.255.0 | global | fa:16:3e:3f:1f:bd |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |  eth0  | True | fe80::f816:3eff:fe3f:1fbd/64 |       .       |  link  | fa:16:3e:3f:1f:bd |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 20 17:56:51 np0005589270.novalocal cloud-init[919]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 17:56:52 np0005589270.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Jan 20 17:56:52 np0005589270.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 20 17:56:52 np0005589270.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Jan 20 17:56:52 np0005589270.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Jan 20 17:56:52 np0005589270.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Jan 20 17:56:52 np0005589270.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Generating public/private rsa key pair.
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: The key fingerprint is:
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: SHA256:5i9Ds3UiWOuoJOW3TC7jarTzFjQnFAKpPLLEjXY2MfQ root@np0005589270.novalocal
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: The key's randomart image is:
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: +---[RSA 3072]----+
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |.oo...           |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |.  +o            |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |+ o.oE           |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |oB =+ . .        |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |+.+.o+ oS.       |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |. .o. .o= o .    |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: | ...o.o+.= o     |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |  +o+=..=.       |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: | ..*+++  o.      |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: +----[SHA256]-----+
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Generating public/private ecdsa key pair.
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: The key fingerprint is:
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: SHA256:EZTrZ0LDYfWS1xfRN5CveRpu14zAEXoKtd+zM950N3Q root@np0005589270.novalocal
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: The key's randomart image is:
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: +---[ECDSA 256]---+
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |       .oo.  .ooo|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |        +..o.o .+|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |       o.+ooo.o +|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |        *.ooo  o |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |       oSo = oo.E|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |        o + +++..|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |         +  ..+*=|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |             +=+*|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |            ..oo.|
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: +----[SHA256]-----+
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Generating public/private ed25519 key pair.
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: The key fingerprint is:
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: SHA256:O0wxbfvhcIuczgjFIpNhj+5+qJ9PLR7V6yu5NZNqqQY root@np0005589270.novalocal
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: The key's randomart image is:
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: +--[ED25519 256]--+
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |                 |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |         .       |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |    o   o o      |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |   . = . = .     |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |    = o S + o    |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |   .Eo B o X o   |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |    .o= =oX +    |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |   ..+o+=O o     |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: |  .+=++o+o=.     |
Jan 20 17:56:52 np0005589270.novalocal cloud-init[919]: +----[SHA256]-----+
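cloud-init has just generated all three host key pairs (RSA, ECDSA, Ed25519) on this first boot; the same fingerprints are republished between the BEGIN/END banner a little later in the log. They can be reproduced on the host for out-of-band verification, assuming the stock OpenSSH key paths:

    # SHA256 fingerprint of every public host key
    for k in /etc/ssh/ssh_host_*_key.pub; do ssh-keygen -lf "$k"; done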
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Reached target Network is Online.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting System Logging Service...
Jan 20 17:56:52 np0005589270.novalocal sm-notify[1002]: Version 2.5.4 starting
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting Permit User Sessions...
Jan 20 17:56:52 np0005589270.novalocal sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 20 17:56:52 np0005589270.novalocal sshd[1004]: Server listening on :: port 22.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Finished Permit User Sessions.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Started Command Scheduler.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Started Getty on tty1.
Jan 20 17:56:52 np0005589270.novalocal crond[1007]: (CRON) STARTUP (1.5.7)
Jan 20 17:56:52 np0005589270.novalocal crond[1007]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 20 17:56:52 np0005589270.novalocal crond[1007]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 17% if used.)
Jan 20 17:56:52 np0005589270.novalocal crond[1007]: (CRON) INFO (running with inotify support)
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Reached target Login Prompts.
Jan 20 17:56:52 np0005589270.novalocal rsyslogd[1003]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1003" x-info="https://www.rsyslog.com"] start
Jan 20 17:56:52 np0005589270.novalocal rsyslogd[1003]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Started System Logging Service.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Reached target Multi-User System.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 20 17:56:52 np0005589270.novalocal rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 17:56:52 np0005589270.novalocal kdumpctl[1012]: kdump: No kdump initial ramdisk found.
Jan 20 17:56:52 np0005589270.novalocal kdumpctl[1012]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
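kdumpctl found no crash initrd for the running kernel and is rebuilding one; the dracut invocation that follows accounts for the long run of module messages further down. Once the rebuild completes, the armed state can be checked, assuming the RHEL 9 kexec-tools packaging:

    # Reports whether the crash kernel is loaded and kdump is operational
    kdumpctl status
    # Memory reserved for the crash kernel, per the crashkernel= setting
    cat /sys/kernel/kexec_crash_size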
Jan 20 17:56:52 np0005589270.novalocal cloud-init[1116]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 20 Jan 2026 17:56:52 +0000. Up 9.18 seconds.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 20 17:56:52 np0005589270.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 20 17:56:53 np0005589270.novalocal dracut[1263]: dracut-057-102.git20250818.el9
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1281]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 20 Jan 2026 17:56:53 +0000. Up 9.62 seconds.
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1285]: #############################################################
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1289]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 20 17:56:53 np0005589270.novalocal dracut[1265]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1296]: 256 SHA256:EZTrZ0LDYfWS1xfRN5CveRpu14zAEXoKtd+zM950N3Q root@np0005589270.novalocal (ECDSA)
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1305]: 256 SHA256:O0wxbfvhcIuczgjFIpNhj+5+qJ9PLR7V6yu5NZNqqQY root@np0005589270.novalocal (ED25519)
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1311]: 3072 SHA256:5i9Ds3UiWOuoJOW3TC7jarTzFjQnFAKpPLLEjXY2MfQ root@np0005589270.novalocal (RSA)
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1313]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1315]: #############################################################
Jan 20 17:56:53 np0005589270.novalocal cloud-init[1281]: Cloud-init v. 24.4-8.el9 finished at Tue, 20 Jan 2026 17:56:53 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.79 seconds
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1325]: Connection closed by 38.102.83.114 port 37656 [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1343]: Unable to negotiate with 38.102.83.114 port 37664: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 20 17:56:53 np0005589270.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 20 17:56:53 np0005589270.novalocal systemd[1]: Reached target Cloud-init target.
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1354]: Unable to negotiate with 38.102.83.114 port 37688: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1359]: Unable to negotiate with 38.102.83.114 port 37696: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1361]: Connection reset by 38.102.83.114 port 37706 [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1366]: Connection reset by 38.102.83.114 port 37718 [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1351]: Connection closed by 38.102.83.114 port 37680 [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1371]: Unable to negotiate with 38.102.83.114 port 37726: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 20 17:56:53 np0005589270.novalocal sshd-session[1379]: Unable to negotiate with 38.102.83.114 port 37734: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
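The burst of preauth failures above is a remote client at 38.102.83.114 cycling through host key algorithms; each "no matching host key type found" line names an offer this sshd would not serve at that moment under its loaded keys and the system crypto policy. To see what the server actually advertises, a sketch:

    # Effective server configuration, including host key algorithms
    sshd -T | grep -i algorithms
    # System-wide crypto policy that shapes the OpenSSH defaults on RHEL 9
    update-crypto-policies --show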
Jan 20 17:56:53 np0005589270.novalocal dracut[1265]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 20 17:56:53 np0005589270.novalocal dracut[1265]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 20 17:56:53 np0005589270.novalocal dracut[1265]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 20 17:56:53 np0005589270.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 17:56:53 np0005589270.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: memstrack is not available
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: memstrack is not available
Jan 20 17:56:54 np0005589270.novalocal dracut[1265]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
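The skip list above runs twice; dracut appears to re-check module availability during the build after its initial scan, so the repetition is expected rather than a sign of failure. Every line names the command whose absence disqualifies a module, and the full inventory this dracut could consider is listable:

    # All modules this dracut installation knows about
    dracut --list-modules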
Jan 20 17:56:55 np0005589270.novalocal dracut[1265]: *** Including module: systemd ***
Jan 20 17:56:55 np0005589270.novalocal dracut[1265]: *** Including module: fips ***
Jan 20 17:56:55 np0005589270.novalocal dracut[1265]: *** Including module: systemd-initrd ***
Jan 20 17:56:55 np0005589270.novalocal dracut[1265]: *** Including module: i18n ***
Jan 20 17:56:55 np0005589270.novalocal dracut[1265]: *** Including module: drm ***
Jan 20 17:56:55 np0005589270.novalocal chronyd[784]: Selected source 162.159.200.1 (2.centos.pool.ntp.org)
Jan 20 17:56:55 np0005589270.novalocal chronyd[784]: System clock TAI offset set to 37 seconds
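chronyd has now selected an upstream server from the configured pool and learned the 37-second TAI-UTC offset. Synchronization quality is visible through chrony's client, assuming chronyd's default command socket is enabled:

    # Reference source, stratum, and current offset
    chronyc tracking
    # Per-source reachability and measurement history
    chronyc sources -v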
Jan 20 17:56:56 np0005589270.novalocal dracut[1265]: *** Including module: prefixdevname ***
Jan 20 17:56:56 np0005589270.novalocal dracut[1265]: *** Including module: kernel-modules ***
Jan 20 17:56:56 np0005589270.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: kernel-modules-extra ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
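
The "added ... to the list of search directories" line is depmod parsing its configuration; given the directories it reports, the stock EL9 dist.conf presumably carries a single search directive (reconstructed from the log and depmod.d(5) syntax, not read from this host):

    # /etc/depmod.d/dist.conf -- assumed stock content
    search updates extra built-in weak-updates
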
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: qemu ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: fstab-sys ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: rootfs-block ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: terminfo ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: udev-rules ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: Skipping udev rule: 91-permissions.rules
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: virtiofs ***
Jan 20 17:56:57 np0005589270.novalocal dracut[1265]: *** Including module: dracut-systemd ***
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]: *** Including module: usrmount ***
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]: *** Including module: base ***
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]: *** Including module: fs-lib ***
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]: *** Including module: kdumpbase ***
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:   microcode_ctl module: mangling fw_dir
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 20 17:56:58 np0005589270.novalocal dracut[1265]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
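
All of the ucode_with_caveats configurations coming back "ignored" is the likely-normal result on a KVM guest: each entry targets a specific bare-metal CPU model, none of which match here, so fw_dir simply returns to its default. To see what microcode revision the guest believes it has:

    grep -m1 microcode /proc/cpuinfo
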
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Including module: openssl ***
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Including module: shutdown ***
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Including module: squash ***
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Including modules done ***
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Installing kernel module dependencies ***
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Installing kernel module dependencies done ***
Jan 20 17:56:59 np0005589270.novalocal dracut[1265]: *** Resolving executable dependencies ***
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 35 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 35 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 33 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 33 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 31 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 28 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 34 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 34 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 32 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 30 affinity is now unmanaged
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 20 17:57:00 np0005589270.novalocal irqbalance[792]: IRQ 29 affinity is now unmanaged
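
The "Operation not permitted" errors are typical for virtio MSI-X vectors on KVM, where the hypervisor does not let the guest rewrite the affinity mask; irqbalance reacts by dropping each IRQ from management, which is harmless. If the repeated warnings matter, the IRQs could be banned up front; a sketch assuming the stock EL9 sysconfig hook:

    # /etc/sysconfig/irqbalance -- IRQBALANCE_ARGS is read by the service unit
    IRQBALANCE_ARGS="--banirq=28 --banirq=29 --banirq=30 --banirq=31 --banirq=32 --banirq=33 --banirq=34 --banirq=35"
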
Jan 20 17:57:01 np0005589270.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 17:57:01 np0005589270.novalocal dracut[1265]: *** Resolving executable dependencies done ***
Jan 20 17:57:01 np0005589270.novalocal dracut[1265]: *** Generating early-microcode cpio image ***
Jan 20 17:57:01 np0005589270.novalocal dracut[1265]: *** Store current command line parameters ***
Jan 20 17:57:01 np0005589270.novalocal dracut[1265]: Stored kernel commandline:
Jan 20 17:57:01 np0005589270.novalocal dracut[1265]: No dracut internal kernel commandline stored in the initramfs
Jan 20 17:57:01 np0005589270.novalocal dracut[1265]: *** Install squash loader ***
Jan 20 17:57:02 np0005589270.novalocal dracut[1265]: *** Squashing the files inside the initramfs ***
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: *** Squashing the files inside the initramfs done ***
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: *** Hardlinking files ***
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Mode:           real
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Files:          50
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Linked:         0 files
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Compared:       0 xattrs
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Compared:       0 files
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Saved:          0 B
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: Duration:       0.000824 seconds
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: *** Hardlinking files done ***
Jan 20 17:57:03 np0005589270.novalocal dracut[1265]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 20 17:57:04 np0005589270.novalocal kdumpctl[1012]: kdump: kexec: loaded kdump kernel
Jan 20 17:57:04 np0005589270.novalocal kdumpctl[1012]: kdump: Starting kdump: [OK]
Jan 20 17:57:04 np0005589270.novalocal systemd[1]: Finished Crash recovery kernel arming.
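
At this point the crash kernel and the initramfs built above are loaded via kexec and kdump is armed. Two quick sanity checks with the standard EL9 tooling would be:

    kdumpctl status
    lsinitrd /boot/initramfs-5.14.0-661.el9.x86_64kdump.img | head
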
Jan 20 17:57:04 np0005589270.novalocal systemd[1]: Startup finished in 2.317s (kernel) + 2.552s (initrd) + 15.751s (userspace) = 20.620s.
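
The three figures add up as logged (2.317 + 2.552 + 15.751 = 20.620 s). To break down where the 15.751 s of userspace time went:

    systemd-analyze blame
    systemd-analyze critical-chain
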
Jan 20 17:57:20 np0005589270.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 17:58:02 np0005589270.novalocal chronyd[784]: Selected source 54.39.23.64 (2.centos.pool.ntp.org)
Jan 20 18:00:38 np0005589270.novalocal sshd-session[4302]: Accepted publickey for zuul from 38.102.83.114 port 49694 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 20 18:00:38 np0005589270.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 20 18:00:38 np0005589270.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 20 18:00:38 np0005589270.novalocal systemd-logind[796]: New session 1 of user zuul.
Jan 20 18:00:38 np0005589270.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 20 18:00:38 np0005589270.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 20 18:00:38 np0005589270.novalocal systemd[4306]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Queued start job for default target Main User Target.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Created slice User Application Slice.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Reached target Paths.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Reached target Timers.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Starting D-Bus User Message Bus Socket...
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Starting Create User's Volatile Files and Directories...
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Finished Create User's Volatile Files and Directories.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Reached target Sockets.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Reached target Basic System.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Reached target Main User Target.
Jan 20 18:00:39 np0005589270.novalocal systemd[4306]: Startup finished in 150ms.
Jan 20 18:00:39 np0005589270.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 20 18:00:39 np0005589270.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 20 18:00:39 np0005589270.novalocal sshd-session[4302]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:00:39 np0005589270.novalocal python3[4388]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:00:42 np0005589270.novalocal python3[4416]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:00:50 np0005589270.novalocal python3[4474]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:00:50 np0005589270.novalocal python3[4514]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 20 18:00:53 np0005589270.novalocal python3[4540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDh4n9x/uSE6cIyj8OLH+6Y4NEO83FA0Gmu3Q6UWyiO5hhezTTncgbJ2b3X26dwFYGOtB0fXMWa5YBd362L6REH9wNaS60Rv3tJo60jQJLPuS9jaZqDVnhkiH97+F3cNu5h0msS1KhRnOS4oVsxjEp1Ls6CA3oq756wpYFHwk8WQuaQ7wEWmvBbVWhsSJf9PM/c9rPn3PMACKZIQ/B3tlq/aZcqL64KU8e4jBoTZcqnjYXlVIEfQLcgb4jOkaOWMSiyDcfLOibMZKNn2ySOa/W76OWdV/7NXGwnrrJEIz1pZteEPM79q9XI7X0JHDBfP8o++7RYI1jOZiHB3+89xIQSrtzfxa67uExzEavhmgFpQAeapbB9iRK4SvQJPVl4Yy1EjmN27e+4lI6o9/JW1rdpCIgRPTZ3iKhHbnn9IRa5N1E1d0gIu2lWu2LP09h8nl8rzsUCy8StrVOu9xUOefthKlh5tZ4laRS8LXykGdc64UwiRWAasb/8qJYZ6e6SH2E= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:00:53 np0005589270.novalocal python3[4564]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:00:54 np0005589270.novalocal python3[4663]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:00:54 np0005589270.novalocal python3[4734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768932053.783493-251-105053364965413/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=42aa40c4a297403d85120f256fb24bcb_id_rsa follow=False checksum=c92ee5b4dc2b6e10de79782092cfc47c580a159c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:00:55 np0005589270.novalocal python3[4857]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:00:55 np0005589270.novalocal python3[4928]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768932054.7779093-306-106387107282935/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=42aa40c4a297403d85120f256fb24bcb_id_rsa.pub follow=False checksum=0a1d8b047f7b5d86975118a5aec3f399256ed117 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
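
Ansible logs file modes in decimal, so mode=448 above is octal 0700 (the ~/.ssh directory), mode=384 is 0600 (the private key) and mode=420 is 0644 (the public key); the 493, 511 and 288 appearing later are 0755, 0777 and 0440. The conversion is one printf away:

    printf '%o\n' 448 384 420 493 511 288    # -> 700 600 644 755 777 440
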
Jan 20 18:00:56 np0005589270.novalocal python3[4976]: ansible-ping Invoked with data=pong
Jan 20 18:00:57 np0005589270.novalocal python3[5000]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:01:00 np0005589270.novalocal python3[5058]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 20 18:01:01 np0005589270.novalocal CROND[5092]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 18:01:01 np0005589270.novalocal run-parts[5095]: (/etc/cron.hourly) starting 0anacron
Jan 20 18:01:01 np0005589270.novalocal anacron[5103]: Anacron started on 2026-01-20
Jan 20 18:01:01 np0005589270.novalocal anacron[5103]: Will run job `cron.daily' in 43 min.
Jan 20 18:01:01 np0005589270.novalocal anacron[5103]: Will run job `cron.weekly' in 63 min.
Jan 20 18:01:01 np0005589270.novalocal anacron[5103]: Will run job `cron.monthly' in 83 min.
Jan 20 18:01:01 np0005589270.novalocal anacron[5103]: Jobs will be executed sequentially
Jan 20 18:01:01 np0005589270.novalocal run-parts[5105]: (/etc/cron.hourly) finished 0anacron
Jan 20 18:01:01 np0005589270.novalocal CROND[5091]: (root) CMDEND (run-parts /etc/cron.hourly)
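
The three anacron delays are consistent with the stock EL9 /etc/anacrontab (job delays of 5, 25 and 45 minutes plus RANDOM_DELAY=45): this boot's random offset works out to 38 minutes, since 5 + 38 = 43, 25 + 38 = 63 and 45 + 38 = 83. (The anacrontab values are assumed defaults, not read from this host.)
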
Jan 20 18:01:01 np0005589270.novalocal python3[5090]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:02 np0005589270.novalocal python3[5129]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:02 np0005589270.novalocal python3[5153]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:02 np0005589270.novalocal python3[5177]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:02 np0005589270.novalocal python3[5201]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:03 np0005589270.novalocal python3[5225]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:04 np0005589270.novalocal sudo[5249]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sypoxwqeollasmqugvozuwpbivzemkzh ; /usr/bin/python3'
Jan 20 18:01:04 np0005589270.novalocal sudo[5249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:05 np0005589270.novalocal python3[5251]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:05 np0005589270.novalocal sudo[5249]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:05 np0005589270.novalocal sudo[5327]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-algrjqxahpqdhyletzwbgyhyttnkbhjz ; /usr/bin/python3'
Jan 20 18:01:05 np0005589270.novalocal sudo[5327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:05 np0005589270.novalocal python3[5329]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:01:05 np0005589270.novalocal sudo[5327]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:06 np0005589270.novalocal sudo[5400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsgnfcqynwajmhrtrizhqcmaablaqyql ; /usr/bin/python3'
Jan 20 18:01:06 np0005589270.novalocal sudo[5400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:06 np0005589270.novalocal python3[5402]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768932065.1836917-31-192170347606303/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:06 np0005589270.novalocal sudo[5400]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:06 np0005589270.novalocal python3[5450]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:07 np0005589270.novalocal python3[5474]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:07 np0005589270.novalocal python3[5498]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:07 np0005589270.novalocal python3[5522]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:08 np0005589270.novalocal python3[5546]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:08 np0005589270.novalocal python3[5570]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:08 np0005589270.novalocal python3[5594]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:08 np0005589270.novalocal python3[5618]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:09 np0005589270.novalocal python3[5642]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:09 np0005589270.novalocal python3[5666]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:09 np0005589270.novalocal python3[5690]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:09 np0005589270.novalocal python3[5714]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:10 np0005589270.novalocal python3[5738]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:10 np0005589270.novalocal python3[5762]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:10 np0005589270.novalocal python3[5786]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:11 np0005589270.novalocal python3[5810]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:11 np0005589270.novalocal python3[5834]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:11 np0005589270.novalocal python3[5858]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:11 np0005589270.novalocal python3[5882]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:12 np0005589270.novalocal python3[5906]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:12 np0005589270.novalocal python3[5930]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:12 np0005589270.novalocal python3[5954]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:13 np0005589270.novalocal python3[5978]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:13 np0005589270.novalocal python3[6002]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:13 np0005589270.novalocal python3[6026]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:01:13 np0005589270.novalocal python3[6050]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
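
Each ansible-authorized_key call above idempotently ensures that exactly one copy of the given public-key line exists in /home/zuul/.ssh/authorized_keys, so re-running the play changes nothing. A minimal shell equivalent of that guarantee (a sketch of the semantics, not what the module literally executes):

    key='ssh-ed25519 AAAA... user@host'    # placeholder, not a real key
    f=/home/zuul/.ssh/authorized_keys
    grep -qxF "$key" "$f" 2>/dev/null || printf '%s\n' "$key" >> "$f"
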
Jan 20 18:01:16 np0005589270.novalocal sudo[6074]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sytwqeugzxhvbuqodellyprwyconsmex ; /usr/bin/python3'
Jan 20 18:01:16 np0005589270.novalocal sudo[6074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:16 np0005589270.novalocal python3[6076]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 18:01:16 np0005589270.novalocal systemd[1]: Starting Time & Date Service...
Jan 20 18:01:16 np0005589270.novalocal systemd[1]: Started Time & Date Service.
Jan 20 18:01:16 np0005589270.novalocal systemd-timedated[6078]: Changed time zone to 'UTC' (UTC).
Jan 20 18:01:16 np0005589270.novalocal sudo[6074]: pam_unix(sudo:session): session closed for user root
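
community.general.timezone drives systemd-timedated over D-Bus, which is why systemd starts the service on demand and timedated logs the change itself. The interactive equivalent is simply:

    timedatectl set-timezone UTC
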
Jan 20 18:01:17 np0005589270.novalocal sudo[6105]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhndtupqqxhvevmtbbibgjdzeewzsyry ; /usr/bin/python3'
Jan 20 18:01:17 np0005589270.novalocal sudo[6105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:17 np0005589270.novalocal python3[6107]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:17 np0005589270.novalocal sudo[6105]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:18 np0005589270.novalocal python3[6183]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:01:18 np0005589270.novalocal python3[6254]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768932078.1268237-251-170155877411185/source _original_basename=tmp8a1y4g6v follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:19 np0005589270.novalocal python3[6354]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:01:19 np0005589270.novalocal python3[6425]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768932078.954068-301-185194360205222/source _original_basename=tmpkk0f3hbp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:20 np0005589270.novalocal sudo[6525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqdjllpfabodhaxttxbckclpogizstpg ; /usr/bin/python3'
Jan 20 18:01:20 np0005589270.novalocal sudo[6525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:20 np0005589270.novalocal python3[6527]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:01:20 np0005589270.novalocal sudo[6525]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:20 np0005589270.novalocal sudo[6598]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sailmlfffllixgmkzzzzoysaaxfqlzfl ; /usr/bin/python3'
Jan 20 18:01:20 np0005589270.novalocal sudo[6598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:20 np0005589270.novalocal python3[6600]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768932080.1968493-381-230476390259578/source _original_basename=tmpp3xgrd8x follow=False checksum=d994f5a0f8305d9967bdf6cc68f2476e459dce01 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:20 np0005589270.novalocal sudo[6598]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:21 np0005589270.novalocal python3[6648]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:01:22 np0005589270.novalocal python3[6674]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:01:22 np0005589270.novalocal sudo[6752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyqudgynfspbbgagauimiiedccwdundk ; /usr/bin/python3'
Jan 20 18:01:22 np0005589270.novalocal sudo[6752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:22 np0005589270.novalocal python3[6754]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:01:22 np0005589270.novalocal sudo[6752]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:22 np0005589270.novalocal sudo[6825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibtnfihmlzrekxizwjxhpnokfjbgacfg ; /usr/bin/python3'
Jan 20 18:01:22 np0005589270.novalocal sudo[6825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:23 np0005589270.novalocal python3[6827]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768932082.4426727-451-29681368218774/source _original_basename=tmpdi6pfd9a follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:23 np0005589270.novalocal sudo[6825]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:23 np0005589270.novalocal sudo[6876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzzwjimburhpswmtapsjrheaxsijolla ; /usr/bin/python3'
Jan 20 18:01:23 np0005589270.novalocal sudo[6876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:23 np0005589270.novalocal python3[6878]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-2c5a-53fa-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:01:23 np0005589270.novalocal sudo[6876]: pam_unix(sudo:session): session closed for user root
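
The drop-in is installed with mode 288 (octal 0440) and the playbook then validates the merged configuration with /usr/sbin/visudo -c. To lint only the new file rather than the whole sudoers stack:

    visudo -cf /etc/sudoers.d/zuul-sudo-grep
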
Jan 20 18:01:24 np0005589270.novalocal python3[6906]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-2c5a-53fa-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 20 18:01:26 np0005589270.novalocal python3[6934]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:44 np0005589270.novalocal sudo[6958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmglkhiuvemzhgdstigczwdmrqfwxui ; /usr/bin/python3'
Jan 20 18:01:44 np0005589270.novalocal sudo[6958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:01:44 np0005589270.novalocal python3[6960]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:01:44 np0005589270.novalocal sudo[6958]: pam_unix(sudo:session): session closed for user root
Jan 20 18:01:46 np0005589270.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 20 18:02:28 np0005589270.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 20 18:02:28 np0005589270.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5820] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 18:02:28 np0005589270.novalocal systemd-udevd[6967]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5951] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5976] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5980] device (eth1): carrier: link connected
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5982] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5989] policy: auto-activating connection 'Wired connection 1' (602bf063-40be-3863-86f7-7246e64f3d42)
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5992] device (eth1): Activation: starting connection 'Wired connection 1' (602bf063-40be-3863-86f7-7246e64f3d42)
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5993] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.5998] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.6002] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:02:28 np0005589270.novalocal NetworkManager[855]: <info>  [1768932148.6009] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:02:29 np0005589270.novalocal python3[6993]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-5b79-e159-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
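
ip -j link emits the link list as JSON, which lets the playbook pick out the hot-plugged eth1 without screen-scraping; piped through the interpreter already on the host, it is also easy to read by eye:

    ip -j link | python3 -m json.tool
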
Jan 20 18:02:39 np0005589270.novalocal sudo[7071]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfrzegrrpsvptosjqztaoccxmstudeit ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:02:39 np0005589270.novalocal sudo[7071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:02:39 np0005589270.novalocal python3[7073]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:02:39 np0005589270.novalocal sudo[7071]: pam_unix(sudo:session): session closed for user root
Jan 20 18:02:39 np0005589270.novalocal sudo[7144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffhjgigdbfyznufindeomtcrmyvnjhie ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:02:39 np0005589270.novalocal sudo[7144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:02:39 np0005589270.novalocal python3[7146]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768932159.2101502-104-92548911351567/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=22dec381566a837936f8fb2c08b36356a24fee97 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:02:39 np0005589270.novalocal sudo[7144]: pam_unix(sudo:session): session closed for user root
Jan 20 18:02:40 np0005589270.novalocal sudo[7194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cirbmmbdeuugvuymzncokmlzvolqqnsc ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:02:40 np0005589270.novalocal sudo[7194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:02:40 np0005589270.novalocal python3[7196]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
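
Restarting NetworkManager is the heavyweight way to pick up the new keyfile: as the lines below show, the daemon is torn down and every device is re-assumed. A lighter alternative (standard nmcli; the profile name is assumed to match the .nmconnection file) would be:

    nmcli connection reload
    nmcli connection up ci-private-network    # profile name assumed from the file name
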
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7287] caught SIGTERM, shutting down normally.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Stopping Network Manager...
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7294] dhcp4 (eth0): canceled DHCP transaction
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7295] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7295] dhcp4 (eth0): state changed no lease
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7296] manager: NetworkManager state is now CONNECTING
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7398] dhcp4 (eth1): canceled DHCP transaction
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7398] dhcp4 (eth1): state changed no lease
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[855]: <info>  [1768932160.7453] exiting (success)
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Stopped Network Manager.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: NetworkManager.service: Consumed 2.326s CPU time, 10.0M memory peak.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Starting Network Manager...
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.8182] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:7a60faef-372d-4827-b0d0-8fdd6d433663)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.8184] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.8244] manager[0x562fb2131000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Starting Hostname Service...
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Started Hostname Service.
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9297] hostname: hostname: using hostnamed
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9298] hostname: static hostname changed from (none) to "np0005589270.novalocal"
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9305] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9311] manager[0x562fb2131000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9311] manager[0x562fb2131000]: rfkill: WWAN hardware radio set enabled
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9354] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9354] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9355] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9356] manager: Networking is enabled by state file
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9360] settings: Loaded settings plugin: keyfile (internal)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9365] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9409] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
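
The deprecation warning names its own remedy; converting any remaining ifcfg files to keyfiles is a one-shot:

    nmcli connection migrate
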
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9423] dhcp: init: Using DHCP client 'internal'
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9428] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9436] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9444] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9461] device (lo): Activation: starting connection 'lo' (ee6edf19-39c6-4a96-abbb-0d8aa9c964b6)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9473] device (eth0): carrier: link connected
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9479] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9487] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9487] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9499] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9509] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9520] device (eth1): carrier: link connected
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9527] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9535] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (602bf063-40be-3863-86f7-7246e64f3d42) (indicated)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9535] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9543] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9555] device (eth1): Activation: starting connection 'Wired connection 1' (602bf063-40be-3863-86f7-7246e64f3d42)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9566] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Started Network Manager.
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9571] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9574] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9575] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9577] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9579] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9581] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9583] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9585] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9590] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9592] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9599] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9600] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9629] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9633] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9695] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9699] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9700] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9704] device (lo): Activation: successful, device activated.
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9727] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9729] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9731] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9732] device (eth0): Activation: successful, device activated.
Jan 20 18:02:40 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932160.9736] manager: NetworkManager state is now CONNECTED_GLOBAL
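The interleaved lines above are NetworkManager's per-device state machine running to completion for lo and eth0: unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated. A short sketch for watching the same transitions live, assuming standard nmcli:

    # Stream device/connection state changes as they happen:
    nmcli monitor
    # Or query one device's current state and the profile it carries:
    nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show eth0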
Jan 20 18:02:40 np0005589270.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 18:02:40 np0005589270.novalocal sudo[7194]: pam_unix(sudo:session): session closed for user root
Jan 20 18:02:41 np0005589270.novalocal systemd[4306]: Starting Mark boot as successful...
Jan 20 18:02:41 np0005589270.novalocal systemd[4306]: Finished Mark boot as successful.
Jan 20 18:02:41 np0005589270.novalocal python3[7281]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-5b79-e159-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:02:51 np0005589270.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:03:10 np0005589270.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6702] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:03:26 np0005589270.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:03:26 np0005589270.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6928] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6930] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6934] device (eth1): Activation: successful, device activated.
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6941] manager: startup complete
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6943] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <warn>  [1768932206.6949] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.6956] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7028] dhcp4 (eth1): canceled DHCP transaction
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7028] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7029] dhcp4 (eth1): state changed no lease
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7043] policy: auto-activating connection 'ci-private-network' (df32b0f8-e05a-5256-8e63-e2a619e93c70)
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7049] device (eth1): Activation: starting connection 'ci-private-network' (df32b0f8-e05a-5256-8e63-e2a619e93c70)
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7051] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7055] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7064] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7075] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7121] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7123] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:03:26 np0005589270.novalocal NetworkManager[7206]: <info>  [1768932206.7130] device (eth1): Activation: successful, device activated.
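To read the eth1 sequence above: the assumed profile 'Wired connection 1' reached activated, but once NetworkManager declared startup complete it re-evaluated the device under full management, the DHCP transaction had produced no lease (reason 'ip-config-unavailable'), and autoconnect fell back to the 'ci-private-network' profile, which then activated within milliseconds with no dhcp4 lines, evidently static addressing. A hedged recreation of such a fallback profile; the real addresses are not recorded in this log, so the values below are placeholders only:

    # Hypothetical equivalent of 'ci-private-network' (addresses invented):
    nmcli connection add type ethernet ifname eth1 \
        con-name ci-private-network \
        ipv4.method manual ipv4.addresses 192.168.122.100/24 \
        connection.autoconnect yes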
Jan 20 18:03:36 np0005589270.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:03:41 np0005589270.novalocal sshd-session[4315]: Received disconnect from 38.102.83.114 port 49694:11: disconnected by user
Jan 20 18:03:41 np0005589270.novalocal sshd-session[4315]: Disconnected from user zuul 38.102.83.114 port 49694
Jan 20 18:03:41 np0005589270.novalocal sshd-session[4302]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:03:41 np0005589270.novalocal systemd-logind[796]: Session 1 logged out. Waiting for processes to exit.
Jan 20 18:04:42 np0005589270.novalocal sshd-session[7313]: Accepted publickey for zuul from 38.102.83.114 port 56852 ssh2: RSA SHA256:4QdNcGxIfGrd0SulXH8wKdvIjwwnijbxtrxruAjIfw8
Jan 20 18:04:42 np0005589270.novalocal systemd-logind[796]: New session 3 of user zuul.
Jan 20 18:04:42 np0005589270.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 20 18:04:42 np0005589270.novalocal sshd-session[7313]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:04:42 np0005589270.novalocal sudo[7392]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnbmovftyypfhwikajjakottqpefzbke ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:04:42 np0005589270.novalocal sudo[7392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:04:42 np0005589270.novalocal python3[7394]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:04:42 np0005589270.novalocal sudo[7392]: pam_unix(sudo:session): session closed for user root
Jan 20 18:04:43 np0005589270.novalocal sudo[7465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spzgeqfmpuiqwtnbvdneaozmzzjhxzka ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:04:43 np0005589270.novalocal sudo[7465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:04:43 np0005589270.novalocal python3[7467]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768932282.489655-373-196729304137321/source _original_basename=tmpr6lfv62j follow=False checksum=9285143d377043d22a1ada338c1e1cd6477a32c8 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:04:43 np0005589270.novalocal sudo[7465]: pam_unix(sudo:session): session closed for user root
Jan 20 18:04:47 np0005589270.novalocal sshd-session[7316]: Connection closed by 38.102.83.114 port 56852
Jan 20 18:04:47 np0005589270.novalocal sshd-session[7313]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:04:47 np0005589270.novalocal systemd-logind[796]: Session 3 logged out. Waiting for processes to exit.
Jan 20 18:04:47 np0005589270.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 18:04:47 np0005589270.novalocal systemd-logind[796]: Removed session 3.
Jan 20 18:05:40 np0005589270.novalocal systemd[4306]: Created slice User Background Tasks Slice.
Jan 20 18:05:41 np0005589270.novalocal systemd[4306]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 18:05:41 np0005589270.novalocal systemd[4306]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 18:11:40 np0005589270.novalocal sshd-session[7497]: Accepted publickey for zuul from 38.102.83.114 port 38574 ssh2: RSA SHA256:4QdNcGxIfGrd0SulXH8wKdvIjwwnijbxtrxruAjIfw8
Jan 20 18:11:40 np0005589270.novalocal systemd-logind[796]: New session 4 of user zuul.
Jan 20 18:11:40 np0005589270.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 20 18:11:40 np0005589270.novalocal sshd-session[7497]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:11:40 np0005589270.novalocal sudo[7524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwiuumpfzpzyuxjwfkigywipwmkarsed ; /usr/bin/python3'
Jan 20 18:11:40 np0005589270.novalocal sudo[7524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:41 np0005589270.novalocal python3[7526]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-e8a2-6994-00000000217f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
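The lsblk call above resolves /dev/vda to its MAJ:MIN pair, which is the device key the io.max writes at 18:11:47 use. Standalone:

    # -n drops the header, -d limits output to the whole disk,
    # -o MAJ:MIN prints the device numbers; for this virtio disk the
    # result is the "252:0" key seen in the cgroup writes below.
    lsblk -nd -o MAJ:MIN /dev/vda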
Jan 20 18:11:41 np0005589270.novalocal sudo[7524]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:41 np0005589270.novalocal sudo[7552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eswcbodwszbkujirpsugrteiqgxbvidq ; /usr/bin/python3'
Jan 20 18:11:41 np0005589270.novalocal sudo[7552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:41 np0005589270.novalocal python3[7554]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:11:41 np0005589270.novalocal sudo[7552]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:41 np0005589270.novalocal sudo[7579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeydeherujbkaijzajbtgyskvonbkdef ; /usr/bin/python3'
Jan 20 18:11:41 np0005589270.novalocal sudo[7579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:41 np0005589270.novalocal python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:11:41 np0005589270.novalocal sudo[7579]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:41 np0005589270.novalocal sudo[7605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcofcuofrvrhjwiduwuufbhwsbzeixmk ; /usr/bin/python3'
Jan 20 18:11:41 np0005589270.novalocal sudo[7605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:42 np0005589270.novalocal python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:11:42 np0005589270.novalocal sudo[7605]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:42 np0005589270.novalocal sudo[7631]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyllnidxujwvorvsjvrudrwshjzstrbl ; /usr/bin/python3'
Jan 20 18:11:42 np0005589270.novalocal sudo[7631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:42 np0005589270.novalocal python3[7633]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:11:42 np0005589270.novalocal sudo[7631]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:42 np0005589270.novalocal sudo[7657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwlxqcghjftovejgkcbzshyzmuipwuvy ; /usr/bin/python3'
Jan 20 18:11:42 np0005589270.novalocal sudo[7657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:43 np0005589270.novalocal python3[7659]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:11:43 np0005589270.novalocal sudo[7657]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:43 np0005589270.novalocal sudo[7735]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmafouaqwghiksapagqvyffciwcqyily ; /usr/bin/python3'
Jan 20 18:11:43 np0005589270.novalocal sudo[7735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:43 np0005589270.novalocal python3[7737]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:11:43 np0005589270.novalocal sudo[7735]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:43 np0005589270.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 20 18:11:43 np0005589270.novalocal sudo[7808]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rutvmcrnizlhsygteaxvqjnrelzoqmyx ; /usr/bin/python3'
Jan 20 18:11:43 np0005589270.novalocal sudo[7808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:44 np0005589270.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 18:11:44 np0005589270.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 20 18:11:44 np0005589270.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 20 18:11:44 np0005589270.novalocal python3[7812]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768932703.3967617-538-10438495331467/source _original_basename=tmpycrrfenm follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:11:44 np0005589270.novalocal sudo[7808]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:44 np0005589270.novalocal sudo[7862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogvopygagnwxlbeqbxmxiuaoaroqrhee ; /usr/bin/python3'
Jan 20 18:11:44 np0005589270.novalocal sudo[7862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:45 np0005589270.novalocal python3[7864]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 18:11:45 np0005589270.novalocal systemd[1]: Reloading.
Jan 20 18:11:45 np0005589270.novalocal systemd-rc-local-generator[7886]: /etc/rc.d/rc.local is not marked executable, skipping.
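The generator message above is benign but worth knowing: systemd-rc-local-generator only synthesizes rc-local.service when the script carries the executable bit, so an /etc/rc.d/rc.local that should run at boot needs:

    # Without this, the generator skips the unit on every daemon reload:
    chmod +x /etc/rc.d/rc.local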
Jan 20 18:11:45 np0005589270.novalocal sudo[7862]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:46 np0005589270.novalocal sudo[7918]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrbvhvtmhumliktlprqztpvumslqngtw ; /usr/bin/python3'
Jan 20 18:11:46 np0005589270.novalocal sudo[7918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:46 np0005589270.novalocal python3[7920]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 20 18:11:46 np0005589270.novalocal sudo[7918]: pam_unix(sudo:session): session closed for user root
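The wait_for above blocks (up to 30 s) until system.slice exposes an io.max file, i.e. until the cgroup v2 io controller is enabled for the top-level cgroups; presumably the override.conf written at 18:11:44 arranges that, though its contents are not logged. The manual equivalent on cgroup v2:

    # Enabling a controller in the root's subtree_control makes each
    # child cgroup (init.scope, system.slice, ...) expose io.max:
    echo "+io" > /sys/fs/cgroup/cgroup.subtree_control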
Jan 20 18:11:46 np0005589270.novalocal sudo[7944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npwkttkukqqnhjdsgxvgojyudosevbzz ; /usr/bin/python3'
Jan 20 18:11:46 np0005589270.novalocal sudo[7944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:47 np0005589270.novalocal python3[7946]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:11:47 np0005589270.novalocal sudo[7944]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:47 np0005589270.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgytwovgnhgpcbholpjacvjgorumneeh ; /usr/bin/python3'
Jan 20 18:11:47 np0005589270.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:47 np0005589270.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:11:47 np0005589270.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:47 np0005589270.novalocal sudo[8000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlvyyhtqwyhpjfjmhwniauskyaqbgoas ; /usr/bin/python3'
Jan 20 18:11:47 np0005589270.novalocal sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:47 np0005589270.novalocal python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:11:47 np0005589270.novalocal sudo[8000]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:47 np0005589270.novalocal sudo[8028]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvlckkzdwnmjrghyyakybhamsjzwtfoe ; /usr/bin/python3'
Jan 20 18:11:47 np0005589270.novalocal sudo[8028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:47 np0005589270.novalocal python3[8030]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:11:47 np0005589270.novalocal sudo[8028]: pam_unix(sudo:session): session closed for user root
Jan 20 18:11:48 np0005589270.novalocal python3[8057]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ef9-e89a-e8a2-6994-000000002186-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
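The four echo commands above apply one identical throttle to each top-level cgroup, and this final task reads everything back. An io.max line is the MAJ:MIN device key followed by key=value limits, and 262144000 bytes/s is exactly 250 MiB/s, so each cat should print something like:

    # Expected io.max contents after the writes above (the kernel may
    # order the key=value pairs differently than they were written):
    252:0 rbps=262144000 wbps=262144000 riops=18000 wiops=18000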
Jan 20 18:11:49 np0005589270.novalocal python3[8087]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:11:51 np0005589270.novalocal sshd-session[7500]: Connection closed by 38.102.83.114 port 38574
Jan 20 18:11:51 np0005589270.novalocal sshd-session[7497]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:11:51 np0005589270.novalocal systemd-logind[796]: Session 4 logged out. Waiting for processes to exit.
Jan 20 18:11:51 np0005589270.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 18:11:51 np0005589270.novalocal systemd[1]: session-4.scope: Consumed 4.618s CPU time.
Jan 20 18:11:52 np0005589270.novalocal systemd-logind[796]: Removed session 4.
Jan 20 18:11:53 np0005589270.novalocal sshd-session[8091]: Accepted publickey for zuul from 38.102.83.114 port 44552 ssh2: RSA SHA256:4QdNcGxIfGrd0SulXH8wKdvIjwwnijbxtrxruAjIfw8
Jan 20 18:11:53 np0005589270.novalocal systemd-logind[796]: New session 5 of user zuul.
Jan 20 18:11:53 np0005589270.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 20 18:11:53 np0005589270.novalocal sshd-session[8091]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:11:54 np0005589270.novalocal sudo[8118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpawlkinbznfboqqxcvgyanbdilcnutv ; /usr/bin/python3'
Jan 20 18:11:54 np0005589270.novalocal sudo[8118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:11:54 np0005589270.novalocal python3[8120]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
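The dnf module call above, stripped of its defaults, is just:

    # state=present with no version pins, standard repos:
    dnf -y install podman buildah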
Jan 20 18:12:00 np0005589270.novalocal setsebool[8163]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 20 18:12:00 np0005589270.novalocal setsebool[8163]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
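The two boolean flips above land mid-transaction, consistent with scriptlets from the container-selinux dependency that podman pulls in. Done by hand with policycoreutils:

    # -P persists the change across reboots; several booleans can be
    # set in a single policy rebuild:
    setsebool -P virt_use_nfs=1 virt_sandbox_use_all_caps=1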
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:12:11 np0005589270.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  Converting 389 SID table entries...
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:12:20 np0005589270.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:12:37 np0005589270.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 18:12:37 np0005589270.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:12:37 np0005589270.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:12:37 np0005589270.novalocal systemd[1]: Reloading.
Jan 20 18:12:37 np0005589270.novalocal systemd-rc-local-generator[8932]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:12:38 np0005589270.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:12:39 np0005589270.novalocal sudo[8118]: pam_unix(sudo:session): session closed for user root
Jan 20 18:12:43 np0005589270.novalocal python3[12985]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ef9-e89a-d7ef-a226-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:12:44 np0005589270.novalocal kernel: evm: overlay not supported
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: Starting D-Bus User Message Bus...
Jan 20 18:12:44 np0005589270.novalocal dbus-broker-launch[13808]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 20 18:12:44 np0005589270.novalocal dbus-broker-launch[13808]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: Started D-Bus User Message Bus.
Jan 20 18:12:44 np0005589270.novalocal dbus-broker-lau[13808]: Ready
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: Created slice Slice /user.
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: podman-13710.scope: unit configures an IP firewall, but not running as root.
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: Started podman-13710.scope.
Jan 20 18:12:44 np0005589270.novalocal systemd[4306]: Started podman-pause-13203fa6.scope.
Jan 20 18:12:45 np0005589270.novalocal sudo[14029]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbtikfbagzixgejqkcxljpkkyjijcncm ; /usr/bin/python3'
Jan 20 18:12:45 np0005589270.novalocal sudo[14029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:12:45 np0005589270.novalocal python3[14031]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.203:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.203:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:12:45 np0005589270.novalocal python3[14031]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
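The blockinfile call above appends a marker-fenced TOML stanza to /etc/containers/registries.conf, so reruns replace the block instead of duplicating it. Given the marker settings in the invocation, the resulting block reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    [[registry]]
    location = "38.102.83.203:5001"
    insecure = true
    # END ANSIBLE MANAGED BLOCK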
Jan 20 18:12:45 np0005589270.novalocal sudo[14029]: pam_unix(sudo:session): session closed for user root
Jan 20 18:12:45 np0005589270.novalocal sshd-session[8094]: Connection closed by 38.102.83.114 port 44552
Jan 20 18:12:45 np0005589270.novalocal sshd-session[8091]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:12:45 np0005589270.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 18:12:45 np0005589270.novalocal systemd[1]: session-5.scope: Consumed 42.523s CPU time.
Jan 20 18:12:45 np0005589270.novalocal systemd-logind[796]: Session 5 logged out. Waiting for processes to exit.
Jan 20 18:12:45 np0005589270.novalocal systemd-logind[796]: Removed session 5.
Jan 20 18:13:04 np0005589270.novalocal sshd-session[20976]: Connection closed by 38.102.83.73 port 40466 [preauth]
Jan 20 18:13:04 np0005589270.novalocal sshd-session[20982]: Connection closed by 38.102.83.73 port 40476 [preauth]
Jan 20 18:13:04 np0005589270.novalocal sshd-session[20984]: Unable to negotiate with 38.102.83.73 port 40478: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 18:13:04 np0005589270.novalocal sshd-session[20980]: Unable to negotiate with 38.102.83.73 port 40490: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 18:13:05 np0005589270.novalocal sshd-session[20979]: Unable to negotiate with 38.102.83.73 port 40498: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 20 18:13:10 np0005589270.novalocal sshd-session[22770]: Accepted publickey for zuul from 38.102.83.114 port 40944 ssh2: RSA SHA256:4QdNcGxIfGrd0SulXH8wKdvIjwwnijbxtrxruAjIfw8
Jan 20 18:13:10 np0005589270.novalocal systemd-logind[796]: New session 6 of user zuul.
Jan 20 18:13:10 np0005589270.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 20 18:13:10 np0005589270.novalocal sshd-session[22770]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:13:10 np0005589270.novalocal python3[22883]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIV0KYf5L1GZd3+ZGa2+Eb4MEufUQXtlGlwSN7BnK+BYhsIeiagJZcA5VL+S815eL3vz1rumQV+9+gQndGlqABk= zuul@np0005589269.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:13:10 np0005589270.novalocal sudo[23069]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmzuenscczvvsbeeqkqnseigbyoxxvp ; /usr/bin/python3'
Jan 20 18:13:10 np0005589270.novalocal sudo[23069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:13:10 np0005589270.novalocal python3[23078]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIV0KYf5L1GZd3+ZGa2+Eb4MEufUQXtlGlwSN7BnK+BYhsIeiagJZcA5VL+S815eL3vz1rumQV+9+gQndGlqABk= zuul@np0005589269.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:13:10 np0005589270.novalocal sudo[23069]: pam_unix(sudo:session): session closed for user root
Jan 20 18:13:11 np0005589270.novalocal sudo[23441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvlhfbiotmrrczqbvzjxzjmasxtztrxh ; /usr/bin/python3'
Jan 20 18:13:11 np0005589270.novalocal sudo[23441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:13:11 np0005589270.novalocal python3[23453]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005589270.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 20 18:13:11 np0005589270.novalocal useradd[23521]: new group: name=cloud-admin, GID=1002
Jan 20 18:13:11 np0005589270.novalocal useradd[23521]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 20 18:13:11 np0005589270.novalocal sudo[23441]: pam_unix(sudo:session): session closed for user root
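The user module resolves to plain useradd; read together with the two useradd audit lines above, the equivalent command is roughly:

    # shell and create_home come from the module call; UID/GID 1002
    # were the next free IDs, not requested explicitly:
    useradd --create-home --shell /bin/bash cloud-admin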
Jan 20 18:13:12 np0005589270.novalocal sudo[23640]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pokvpoqntywpmwdayeasxpjlhsqgeonx ; /usr/bin/python3'
Jan 20 18:13:12 np0005589270.novalocal sudo[23640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:13:12 np0005589270.novalocal python3[23653]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIV0KYf5L1GZd3+ZGa2+Eb4MEufUQXtlGlwSN7BnK+BYhsIeiagJZcA5VL+S815eL3vz1rumQV+9+gQndGlqABk= zuul@np0005589269.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
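This is the third authorized_key invocation with the same ECDSA public key (zuul and root above, cloud-admin here). Each run ensures the target user's ~/.ssh exists with safe permissions (manage_dir=True) and that authorized_keys contains one line of this shape:

    ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAI... zuul@np0005589269.novalocal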
Jan 20 18:13:12 np0005589270.novalocal sudo[23640]: pam_unix(sudo:session): session closed for user root
Jan 20 18:13:12 np0005589270.novalocal sudo[23919]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pghxyaweajyxacnmqczixtnpekvhzrtm ; /usr/bin/python3'
Jan 20 18:13:12 np0005589270.novalocal sudo[23919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:13:12 np0005589270.novalocal python3[23929]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:13:12 np0005589270.novalocal sudo[23919]: pam_unix(sudo:session): session closed for user root
Jan 20 18:13:13 np0005589270.novalocal sudo[24194]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbldulbdzucwcmysoynrrphjweqpfsnt ; /usr/bin/python3'
Jan 20 18:13:13 np0005589270.novalocal sudo[24194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:13:13 np0005589270.novalocal python3[24202]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768932792.3965116-167-217107867489601/source _original_basename=tmpwlsag42e follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:13:13 np0005589270.novalocal sudo[24194]: pam_unix(sudo:session): session closed for user root
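The copy above installs /etc/sudoers.d/cloud-admin with mode 0640; the payload itself is hidden (content=NOT_LOGGING_PARAMETER). Purely as an assumption, a CI admin drop-in of this kind usually grants passwordless sudo:

    # Hypothetical contents; the real file is not recorded in this log:
    cloud-admin ALL=(ALL) NOPASSWD:ALL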
Jan 20 18:13:13 np0005589270.novalocal sudo[24549]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdagekmojlnllaszpatglixuwhletxly ; /usr/bin/python3'
Jan 20 18:13:13 np0005589270.novalocal sudo[24549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:13:14 np0005589270.novalocal python3[24560]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 20 18:13:14 np0005589270.novalocal systemd[1]: Starting Hostname Service...
Jan 20 18:13:14 np0005589270.novalocal systemd[1]: Started Hostname Service.
Jan 20 18:13:15 np0005589270.novalocal systemd-hostnamed[24662]: Changed pretty hostname to 'compute-0'
Jan 20 18:13:15 compute-0 systemd-hostnamed[24662]: Hostname set to <compute-0> (static)
Jan 20 18:13:15 compute-0 NetworkManager[7206]: <info>  [1768932795.2738] hostname: static hostname changed from "np0005589270.novalocal" to "compute-0"
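The hostname module with use=systemd goes through systemd-hostnamed over D-Bus, which is why the pretty and static names change together, the journal's host field flips from np0005589270.novalocal to compute-0 mid-sequence above, and NetworkManager reacts immediately. The command-line equivalent:

    # Same D-Bus path the module used; sets the static hostname:
    hostnamectl set-hostname compute-0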
Jan 20 18:13:15 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:13:15 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:13:15 compute-0 sudo[24549]: pam_unix(sudo:session): session closed for user root
Jan 20 18:13:15 compute-0 sshd-session[22825]: Connection closed by 38.102.83.114 port 40944
Jan 20 18:13:15 compute-0 sshd-session[22770]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:13:15 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 18:13:15 compute-0 systemd[1]: session-6.scope: Consumed 2.403s CPU time.
Jan 20 18:13:15 compute-0 systemd-logind[796]: Session 6 logged out. Waiting for processes to exit.
Jan 20 18:13:15 compute-0 systemd-logind[796]: Removed session 6.
Jan 20 18:13:25 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:13:30 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:13:30 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:13:30 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 3.054s CPU time.
Jan 20 18:13:30 compute-0 systemd[1]: run-rfdfc4038b69244619cf67012f7c821d1.service: Deactivated successfully.
Jan 20 18:13:45 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:17:07 compute-0 sshd-session[29956]: Accepted publickey for zuul from 38.102.83.73 port 33992 ssh2: RSA SHA256:4QdNcGxIfGrd0SulXH8wKdvIjwwnijbxtrxruAjIfw8
Jan 20 18:17:07 compute-0 systemd-logind[796]: New session 7 of user zuul.
Jan 20 18:17:07 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 20 18:17:07 compute-0 sshd-session[29956]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:17:08 compute-0 python3[30032]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:17:11 compute-0 sudo[30146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jivxolacvyevqxfblmrpbuvdiwpeofbd ; /usr/bin/python3'
Jan 20 18:17:11 compute-0 sudo[30146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:11 compute-0 python3[30148]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:11 compute-0 sudo[30146]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:11 compute-0 sudo[30219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izbnburqnogcwyeeczkliaivdqmceaqz ; /usr/bin/python3'
Jan 20 18:17:11 compute-0 sudo[30219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:11 compute-0 python3[30221]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:11 compute-0 sudo[30219]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:12 compute-0 sudo[30245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjvcqeyyqnvwqqnrfdqxpisniyfjxcej ; /usr/bin/python3'
Jan 20 18:17:12 compute-0 sudo[30245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:12 compute-0 python3[30247]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:12 compute-0 sudo[30245]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:12 compute-0 sudo[30318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzkyhfqucrzdrcjgzzdqobwxhsxqynvs ; /usr/bin/python3'
Jan 20 18:17:12 compute-0 sudo[30318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:12 compute-0 python3[30320]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:12 compute-0 sudo[30318]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:12 compute-0 sudo[30344]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlwanukqafxukoilzzwohkxfziokwqnj ; /usr/bin/python3'
Jan 20 18:17:12 compute-0 sudo[30344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:12 compute-0 python3[30346]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:12 compute-0 sudo[30344]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:13 compute-0 sudo[30417]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mepbcbmjbqjgsjhhsxvkjgnjivwmrbxn ; /usr/bin/python3'
Jan 20 18:17:13 compute-0 sudo[30417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:13 compute-0 python3[30419]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:13 compute-0 sudo[30417]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:13 compute-0 sudo[30443]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftbczdivutoqaradmzolvmizhogjslls ; /usr/bin/python3'
Jan 20 18:17:13 compute-0 sudo[30443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:13 compute-0 python3[30445]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:13 compute-0 sudo[30443]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:13 compute-0 sudo[30516]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcefvfqsjidxkwkbmjwxmbxhxbzonpkc ; /usr/bin/python3'
Jan 20 18:17:13 compute-0 sudo[30516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:13 compute-0 python3[30518]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:14 compute-0 sudo[30516]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:14 compute-0 sudo[30542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbrpnmlbssgzvavospwfyzgrizuoyqdr ; /usr/bin/python3'
Jan 20 18:17:14 compute-0 sudo[30542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:14 compute-0 python3[30544]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:14 compute-0 sudo[30542]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:14 compute-0 sudo[30615]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iussxtuagssvrhtlnxablezxqeddhfnz ; /usr/bin/python3'
Jan 20 18:17:14 compute-0 sudo[30615]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:14 compute-0 python3[30617]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:14 compute-0 sudo[30615]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:14 compute-0 sudo[30641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdlrnakecqfcoeuepdlnxjwdfiditqz ; /usr/bin/python3'
Jan 20 18:17:14 compute-0 sudo[30641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:14 compute-0 python3[30643]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:14 compute-0 sudo[30641]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:15 compute-0 sudo[30714]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mikuhdmjmpkmyontopbwufbgegrmrprf ; /usr/bin/python3'
Jan 20 18:17:15 compute-0 sudo[30714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:15 compute-0 python3[30716]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:15 compute-0 sudo[30714]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:15 compute-0 sudo[30740]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfcsknaxyhiasidqfokzwtozigxnfdma ; /usr/bin/python3'
Jan 20 18:17:15 compute-0 sudo[30740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:15 compute-0 python3[30742]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:17:15 compute-0 sudo[30740]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:15 compute-0 sudo[30813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvpwrsnlyuawjtvjudgufxkirgeukfh ; /usr/bin/python3'
Jan 20 18:17:15 compute-0 sudo[30813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:17:16 compute-0 python3[30815]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768933031.0130734-34095-215844548579355/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:17:16 compute-0 sudo[30813]: pam_unix(sudo:session): session closed for user root
Jan 20 18:17:18 compute-0 sshd-session[30840]: Unable to negotiate with 192.168.122.11 port 39710: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 18:17:18 compute-0 sshd-session[30844]: Connection closed by 192.168.122.11 port 39696 [preauth]
Jan 20 18:17:18 compute-0 sshd-session[30841]: Connection closed by 192.168.122.11 port 39698 [preauth]
Jan 20 18:17:18 compute-0 sshd-session[30842]: Unable to negotiate with 192.168.122.11 port 39718: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 18:17:18 compute-0 sshd-session[30843]: Unable to negotiate with 192.168.122.11 port 39722: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
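The two bursts of preauth failures (from 38.102.83.73 at 18:13 and 192.168.122.11 here) have the signature of an ssh-keyscan-style probe: several parallel connections, each offering exactly one host key algorithm, failing for the types this server has no key for (ssh-ed25519 and the FIDO sk-* types) and closing cleanly for the ones it does. To check what the server can actually offer:

    # Server side: list the host keys that are installed:
    ls /etc/ssh/ssh_host_*_key.pub
    # Client side: probe a single algorithm the same way the scanner did:
    ssh -o HostKeyAlgorithms=ssh-ed25519 zuul@compute-0 true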
Jan 20 18:17:27 compute-0 python3[30873]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:22:27 compute-0 sshd-session[29959]: Received disconnect from 38.102.83.73 port 33992:11: disconnected by user
Jan 20 18:22:27 compute-0 sshd-session[29959]: Disconnected from user zuul 38.102.83.73 port 33992
Jan 20 18:22:27 compute-0 sshd-session[29956]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:22:27 compute-0 systemd-logind[796]: Session 7 logged out. Waiting for processes to exit.
Jan 20 18:22:27 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 18:22:27 compute-0 systemd[1]: session-7.scope: Consumed 5.801s CPU time.
Jan 20 18:22:27 compute-0 systemd-logind[796]: Removed session 7.
Jan 20 18:29:30 compute-0 sshd-session[30880]: Accepted publickey for zuul from 192.168.122.30 port 37974 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:29:30 compute-0 systemd-logind[796]: New session 8 of user zuul.
Jan 20 18:29:30 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 20 18:29:30 compute-0 sshd-session[30880]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:29:31 compute-0 python3.9[31033]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:29:32 compute-0 sudo[31212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usuqhqoaublsazovhsgigzmqfpiqcodb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933771.9791815-51-117366917598776/AnsiballZ_command.py'
Jan 20 18:29:32 compute-0 sudo[31212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:29:32 compute-0 python3.9[31214]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
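The multi-line _raw_params above is a complete shell payload; rendered as a standalone script with annotations (same commands, nothing added but comments):

    #!/bin/bash
    set -euxo pipefail                        # fail fast, echo each command
    pushd /var/tmp
    # Fetch the main branch of repo-setup and unpack it
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv                    # throwaway virtualenv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./   # tarball has no git metadata, so pin the PBR version
    ./venv/bin/repo-setup current-podified -b antelope   # lay down the repo files
    popd                                      # back to /var/tmp before deleting
    rm -rf repo-setup-main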
Jan 20 18:29:39 compute-0 sudo[31212]: pam_unix(sudo:session): session closed for user root
Jan 20 18:29:40 compute-0 sshd-session[30883]: Connection closed by 192.168.122.30 port 37974
Jan 20 18:29:40 compute-0 sshd-session[30880]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:29:40 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 18:29:40 compute-0 systemd[1]: session-8.scope: Consumed 8.705s CPU time.
Jan 20 18:29:40 compute-0 systemd-logind[796]: Session 8 logged out. Waiting for processes to exit.
Jan 20 18:29:40 compute-0 systemd-logind[796]: Removed session 8.
Jan 20 18:29:56 compute-0 sshd-session[31274]: Accepted publickey for zuul from 192.168.122.30 port 49602 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:29:56 compute-0 systemd-logind[796]: New session 9 of user zuul.
Jan 20 18:29:56 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 20 18:29:56 compute-0 sshd-session[31274]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:29:57 compute-0 python3.9[31427]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 18:29:58 compute-0 python3.9[31601]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:29:59 compute-0 sudo[31751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjlooudmhlorvhjggwofvgjzwvdnslr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933799.228454-88-236170407095739/AnsiballZ_command.py'
Jan 20 18:29:59 compute-0 sudo[31751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:29:59 compute-0 python3.9[31753]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:30:00 compute-0 sudo[31751]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:01 compute-0 sudo[31904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ishhqgjcbjcdaxmzdbzgmnmbyfcjtlsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933800.5197837-124-230220701960177/AnsiballZ_stat.py'
Jan 20 18:30:01 compute-0 sudo[31904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:01 compute-0 python3.9[31906]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:30:01 compute-0 sudo[31904]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:01 compute-0 sudo[32056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzkxlcphbdugwcgddjxskcrashprrrxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933801.488302-148-262116673492866/AnsiballZ_file.py'
Jan 20 18:30:01 compute-0 sudo[32056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:02 compute-0 python3.9[32058]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:30:02 compute-0 sudo[32056]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:02 compute-0 sudo[32209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsjjhkhomtaogpkszzmtdejwzjooplyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933802.4106143-172-100134328501684/AnsiballZ_stat.py'
Jan 20 18:30:02 compute-0 sudo[32209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:03 compute-0 python3.9[32211]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:30:03 compute-0 sudo[32209]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:03 compute-0 sudo[32332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciummvwgfjeqrvwdknnopxbfggypqmmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933802.4106143-172-100134328501684/AnsiballZ_copy.py'
Jan 20 18:30:03 compute-0 sudo[32332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:03 compute-0 python3.9[32334]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768933802.4106143-172-100134328501684/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:30:03 compute-0 sudo[32332]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:04 compute-0 sudo[32484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxguxkoqpaxaypmohcyfsubixdyilbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933803.920372-217-245580467851171/AnsiballZ_setup.py'
Jan 20 18:30:04 compute-0 sudo[32484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:04 compute-0 python3.9[32486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:30:04 compute-0 sudo[32484]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:05 compute-0 sudo[32640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdjokexmluvyambvpssnyvenfkquzruv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933805.0492635-241-7697687488401/AnsiballZ_file.py'
Jan 20 18:30:05 compute-0 sudo[32640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:05 compute-0 python3.9[32642]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:30:05 compute-0 sudo[32640]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:06 compute-0 sudo[32792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnforsgxezlyoytrhmocwqlrqpgsuiyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933805.8944716-268-31916665379354/AnsiballZ_file.py'
Jan 20 18:30:06 compute-0 sudo[32792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:06 compute-0 python3.9[32794]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:30:06 compute-0 sudo[32792]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:07 compute-0 python3.9[32944]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:30:12 compute-0 python3.9[33197]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:30:13 compute-0 python3.9[33347]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:30:14 compute-0 python3.9[33501]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:30:15 compute-0 sudo[33657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkttsyrdgggwdrkhueeygjkjlqsmtspy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933815.3018522-412-86748187430449/AnsiballZ_setup.py'
Jan 20 18:30:15 compute-0 sudo[33657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:15 compute-0 python3.9[33659]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:30:16 compute-0 sudo[33657]: pam_unix(sudo:session): session closed for user root
Jan 20 18:30:16 compute-0 sudo[33741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doayqasogtnapzernydkgqfrmaubrljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933815.3018522-412-86748187430449/AnsiballZ_dnf.py'
Jan 20 18:30:16 compute-0 sudo[33741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:30:16 compute-0 python3.9[33743]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
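For readability, the package set in the dnf task above resolves to a single transaction; the command-line equivalent (illustrative):

    dnf -y install driverctl lvm2 crudini jq nftables NetworkManager \
        openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch \
        sysstat iproute-tc ksmtuned systemd-container \
        crypto-policies-scripts grubby sos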
Jan 20 18:31:02 compute-0 systemd[1]: Reloading.
Jan 20 18:31:02 compute-0 systemd-rc-local-generator[33943]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:31:02 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 20 18:31:02 compute-0 systemd[1]: Reloading.
Jan 20 18:31:03 compute-0 systemd-rc-local-generator[33985]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:31:03 compute-0 systemd[1]: Starting dnf makecache...
Jan 20 18:31:03 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 20 18:31:03 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 20 18:31:03 compute-0 systemd[1]: Reloading.
Jan 20 18:31:03 compute-0 systemd-rc-local-generator[34023]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:31:03 compute-0 dnf[33994]: Failed determining last makecache time.
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-barbican-42b4c41831408a8e323 166 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 200 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-cinder-1c00d6490d88e436f26ef 216 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-python-stevedore-c4acc5639fd2329372142 213 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-python-cloudkitty-tests-tempest-2c80f8 193 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-os-refresh-config-9bfc52b5049be2d8de61 136 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 184 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-python-designate-tests-tempest-347fdbc 193 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-glance-1fd12c29b339f30fe823e 176 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 183 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-manila-3c01b7181572c95dac462 163 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-python-whitebox-neutron-tests-tempest- 193 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:31:03 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:31:03 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-octavia-ba397f07a7331190208c 166 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-watcher-c014f81a8647287f6dcc 167 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-ansible-config_template-5ccaa22121a7ff 192 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 144 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-swift-dc98a8463506ac520c469a 182 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-python-tempestconf-8515371b7cceebd4282 193 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: delorean-openstack-heat-ui-013accbfd179753bc3f0 201 kB/s | 3.0 kB     00:00
Jan 20 18:31:03 compute-0 dnf[33994]: CentOS Stream 9 - BaseOS                         53 kB/s | 6.4 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: CentOS Stream 9 - AppStream                      60 kB/s | 6.8 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: CentOS Stream 9 - CRB                            61 kB/s | 6.3 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: CentOS Stream 9 - Extras packages                61 kB/s | 7.3 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: dlrn-antelope-testing                           180 kB/s | 3.0 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: dlrn-antelope-build-deps                        184 kB/s | 3.0 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: centos9-rabbitmq                                101 kB/s | 3.0 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: centos9-storage                                 130 kB/s | 3.0 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: centos9-opstools                                127 kB/s | 3.0 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: NFV SIG OpenvSwitch                             135 kB/s | 3.0 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: repo-setup-centos-appstream                     189 kB/s | 4.4 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: repo-setup-centos-baseos                        158 kB/s | 3.9 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: repo-setup-centos-highavailability              161 kB/s | 3.9 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: repo-setup-centos-powertools                    186 kB/s | 4.3 kB     00:00
Jan 20 18:31:04 compute-0 dnf[33994]: Extra Packages for Enterprise Linux 9 - x86_64  257 kB/s |  33 kB     00:00
Jan 20 18:31:05 compute-0 dnf[33994]: Metadata cache created.
Jan 20 18:31:05 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 18:31:05 compute-0 systemd[1]: Finished dnf makecache.
Jan 20 18:31:05 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.638s CPU time.
Jan 20 18:32:11 compute-0 kernel: SELinux:  Converting 2724 SID table entries...
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:32:11 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:32:11 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 20 18:32:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:32:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:32:11 compute-0 systemd[1]: Reloading.
Jan 20 18:32:12 compute-0 systemd-rc-local-generator[34377]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:32:12 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:32:12 compute-0 sudo[33741]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:32:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:32:13 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.630s CPU time.
Jan 20 18:32:13 compute-0 systemd[1]: run-rdf7388c5627143c7be62d9ba027fc117.service: Deactivated successfully.
Jan 20 18:32:14 compute-0 sudo[35289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koolibxoaouazhafbtjrkagewoqeeivn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933934.5141256-448-133308713859291/AnsiballZ_command.py'
Jan 20 18:32:14 compute-0 sudo[35289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:15 compute-0 python3.9[35291]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:32:16 compute-0 sudo[35289]: pam_unix(sudo:session): session closed for user root
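rpm -V re-verifies every file of the named packages against the RPM database (size, mode, digest, owner, group, mtime, capabilities). It prints nothing for a package that verifies cleanly, so a silent run here confirms the 18:30 install left its payloads unmodified. The failure format, for reference (illustrative output, not from this log):

    rpm -V lvm2
    # A file that no longer matches prints a flag string, e.g.
    #   S.5....T.  c /etc/lvm/lvm.conf
    # S = size differs, 5 = digest differs, T = mtime differs; "c" marks a config file.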
Jan 20 18:32:17 compute-0 sudo[35571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyhyrgikwzkejjkjxnyhaqgunchtozmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933936.460942-472-53520804065808/AnsiballZ_selinux.py'
Jan 20 18:32:17 compute-0 sudo[35571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:17 compute-0 python3.9[35573]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 18:32:17 compute-0 sudo[35571]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:18 compute-0 sudo[35723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrlevgoasmgnkwzcnufphvmxowckjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933937.9079118-505-108934291153318/AnsiballZ_command.py'
Jan 20 18:32:18 compute-0 sudo[35723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:18 compute-0 python3.9[35725]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 18:32:19 compute-0 sudo[35723]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:21 compute-0 sudo[35876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owxgsyqxscjijfpskrdorqlndtyqsqiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933940.7896123-529-178288732308311/AnsiballZ_file.py'
Jan 20 18:32:21 compute-0 sudo[35876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:21 compute-0 python3.9[35878]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:32:21 compute-0 sudo[35876]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:22 compute-0 sudo[36028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ninbndrrxtumgvzgrpzanzktuwvburci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933941.8340352-553-97156232365964/AnsiballZ_mount.py'
Jan 20 18:32:22 compute-0 sudo[36028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:22 compute-0 python3.9[36030]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 18:32:22 compute-0 sudo[36028]: pam_unix(sudo:session): session closed for user root
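The three tasks above stage a 1 GiB swap file without activating it: dd writes the zeroed file (skipped when /swap already exists, per creates=/swap), the file task locks it to root:root 0600, and ansible.posix.mount with state=present only records the fstab entry. A shell sketch of the same steps (the fstab line is inferred from the module arguments src=/swap, fstype=swap, opts=sw):

    dd if=/dev/zero of=/swap bs=1M count=1024    # 1024 MiB of zeroes
    chown root:root /swap && chmod 0600 /swap    # swap files must not be world-readable
    grep -q '^/swap ' /etc/fstab || echo '/swap none swap sw 0 0' >> /etc/fstab

mkswap and swapon follow later in the run, at 18:33:15.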
Jan 20 18:32:23 compute-0 sudo[36180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjfantmanggfahbbgowuhogksufctif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933943.6152735-637-175928528030657/AnsiballZ_file.py'
Jan 20 18:32:23 compute-0 sudo[36180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:24 compute-0 python3.9[36182]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:32:24 compute-0 sudo[36180]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:24 compute-0 sudo[36332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awjjizyvgdmzcymmlgbudeihtgusiqql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933944.4492846-661-130541687868008/AnsiballZ_stat.py'
Jan 20 18:32:24 compute-0 sudo[36332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:27 compute-0 python3.9[36334]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:32:27 compute-0 sudo[36332]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:28 compute-0 sudo[36456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvxdmdwnaytvwtefceumwruhueazzayx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933944.4492846-661-130541687868008/AnsiballZ_copy.py'
Jan 20 18:32:28 compute-0 sudo[36456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:28 compute-0 python3.9[36458]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768933944.4492846-661-130541687868008/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:32:28 compute-0 sudo[36456]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:30 compute-0 sudo[36608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtvwhxrygdslspgilxaasmnwxhwthfyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933950.630869-733-251950257488650/AnsiballZ_stat.py'
Jan 20 18:32:30 compute-0 sudo[36608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:33 compute-0 python3.9[36610]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:32:33 compute-0 sudo[36608]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:33 compute-0 sudo[36760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fazhvgtdgarihmwdtlwuwildecpjkpcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933953.5453312-757-1825596649526/AnsiballZ_command.py'
Jan 20 18:32:33 compute-0 sudo[36760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:34 compute-0 python3.9[36762]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:32:34 compute-0 sudo[36760]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:37 compute-0 sudo[36913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luijqrieyzhwaaipsyxofqbjvaazuanc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933956.6727912-781-216627512319116/AnsiballZ_file.py'
Jan 20 18:32:37 compute-0 sudo[36913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:37 compute-0 python3.9[36915]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:32:37 compute-0 sudo[36913]: pam_unix(sudo:session): session closed for user root
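vgimportdevices --all records every currently visible PV in /etc/lvm/devices/system.devices, and the follow-up file task (state=touch, mode 0600) makes sure the file exists even when no PVs were found; on el9, where the LVM devices file is enabled by default, an existing file restricts device scanning to the entries listed in it. Shell equivalent (illustrative):

    /usr/sbin/vgimportdevices --all              # write current PVs into the devices file
    touch /etc/lvm/devices/system.devices        # ensure the file exists even with zero PVs
    chmod 0600 /etc/lvm/devices/system.devices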
Jan 20 18:32:38 compute-0 sudo[37065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuimoemtgcflpwxxjlsrwepamusjzemr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933957.6528165-814-249709031533412/AnsiballZ_getent.py'
Jan 20 18:32:38 compute-0 sudo[37065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:38 compute-0 python3.9[37067]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 18:32:38 compute-0 sudo[37065]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:38 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:32:39 compute-0 sudo[37219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwxgnvprrwfrlqunhqemcuqwfxuvmjaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933958.624004-838-274096003120745/AnsiballZ_group.py'
Jan 20 18:32:39 compute-0 sudo[37219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:39 compute-0 python3.9[37221]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:32:39 compute-0 groupadd[37222]: group added to /etc/group: name=qemu, GID=107
Jan 20 18:32:39 compute-0 groupadd[37222]: group added to /etc/gshadow: name=qemu
Jan 20 18:32:39 compute-0 groupadd[37222]: new group: name=qemu, GID=107
Jan 20 18:32:39 compute-0 sudo[37219]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:40 compute-0 sudo[37377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xljbioxfgrrbldgxjdngkxiplztqjacg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933960.288205-862-162372184688052/AnsiballZ_user.py'
Jan 20 18:32:40 compute-0 sudo[37377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:41 compute-0 python3.9[37379]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 18:32:41 compute-0 useradd[37381]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 18:32:41 compute-0 sudo[37377]: pam_unix(sudo:session): session closed for user root
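The getent/group/user trio above pins the qemu group and user to GID/UID 107, presumably (an inference, not stated in the log) so file ownership stays consistent between the host and container images that ship the same qemu IDs. Shell equivalent:

    groupadd -g 107 qemu
    useradd -u 107 -g qemu -s /sbin/nologin -c 'qemu user' qemu   # home /home/qemu created per login.defs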
Jan 20 18:32:41 compute-0 sudo[37537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgzxguviqcikvdgsyidsmlmjsemzhhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933961.4683065-886-48155264562805/AnsiballZ_getent.py'
Jan 20 18:32:41 compute-0 sudo[37537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:41 compute-0 python3.9[37539]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 18:32:41 compute-0 sudo[37537]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:42 compute-0 sudo[37690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghaaweyxajasvoniecacckqqgptstqod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933962.2746336-910-219660167758565/AnsiballZ_group.py'
Jan 20 18:32:42 compute-0 sudo[37690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:42 compute-0 python3.9[37692]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:32:42 compute-0 groupadd[37693]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 20 18:32:42 compute-0 groupadd[37693]: group added to /etc/gshadow: name=hugetlbfs
Jan 20 18:32:42 compute-0 groupadd[37693]: new group: name=hugetlbfs, GID=42477
Jan 20 18:32:42 compute-0 sudo[37690]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:43 compute-0 sudo[37848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xposdrnqfcyphzvjwhvgkwurqltyrsok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933963.264578-937-118067039573144/AnsiballZ_file.py'
Jan 20 18:32:43 compute-0 sudo[37848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:43 compute-0 python3.9[37850]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 18:32:43 compute-0 sudo[37848]: pam_unix(sudo:session): session closed for user root
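The directory task applies an SELinux context directly (seuser=system_u, setype=virt_cache_t), which behaves like chcon rather than a policy change, so a later restorecon could revert it unless the policy already maps the path. Illustrative equivalent:

    install -d -o qemu -g qemu -m 0755 /var/lib/vhost_sockets
    chcon -u system_u -t virt_cache_t /var/lib/vhost_sockets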
Jan 20 18:32:44 compute-0 sudo[38000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jibusbspqcwypkqxsfdemqlmvuzsedhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933964.3468273-970-218492090755568/AnsiballZ_dnf.py'
Jan 20 18:32:44 compute-0 sudo[38000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:44 compute-0 python3.9[38002]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:32:46 compute-0 sudo[38000]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:48 compute-0 sudo[38153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txdtgvvgxjqrwkziyngikytrszfunegj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933967.539784-994-46627964237935/AnsiballZ_file.py'
Jan 20 18:32:48 compute-0 sudo[38153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:48 compute-0 python3.9[38155]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:32:48 compute-0 sudo[38153]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:48 compute-0 sudo[38305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghrpucdbulyjxkgypvouvuhvavuzixbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933968.521762-1018-132756169192907/AnsiballZ_stat.py'
Jan 20 18:32:48 compute-0 sudo[38305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:49 compute-0 python3.9[38307]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:32:49 compute-0 sudo[38305]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:49 compute-0 sudo[38428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsnrtpyshluxygdbjmtoqeesxoevzpjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933968.521762-1018-132756169192907/AnsiballZ_copy.py'
Jan 20 18:32:49 compute-0 sudo[38428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:49 compute-0 python3.9[38430]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768933968.521762-1018-132756169192907/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:32:49 compute-0 sudo[38428]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:50 compute-0 sudo[38580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfupoduhxwujhnfneddpctphyhfpsfix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933969.8882668-1063-9386695782032/AnsiballZ_systemd.py'
Jan 20 18:32:50 compute-0 sudo[38580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:50 compute-0 python3.9[38582]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:32:50 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 18:32:50 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 18:32:50 compute-0 kernel: Bridge firewalling registered
Jan 20 18:32:50 compute-0 systemd-modules-load[38586]: Inserted module 'br_netfilter'
Jan 20 18:32:50 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 18:32:51 compute-0 sudo[38580]: pam_unix(sudo:session): session closed for user root
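Restarting systemd-modules-load re-reads /etc/modules-load.d/, and the only module actually inserted is br_netfilter, so that is evidently what the freshly copied 99-edpm.conf lists (the log records only the file's checksum, so its content is inferred). Hand-run equivalent:

    # Assuming 99-edpm.conf contains the single line "br_netfilter":
    echo br_netfilter > /etc/modules-load.d/99-edpm.conf
    systemctl restart systemd-modules-load.service
    lsmod | grep br_netfilter    # confirm the module is loaded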
Jan 20 18:32:51 compute-0 sudo[38739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwfixgzwdwrdolwmaiytqbjxzuravrpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933971.2047362-1087-201284050282280/AnsiballZ_stat.py'
Jan 20 18:32:51 compute-0 sudo[38739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:51 compute-0 python3.9[38741]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:32:51 compute-0 sudo[38739]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:52 compute-0 sudo[38862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkslyuvczjjcandcfgtrkztidpdeaww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933971.2047362-1087-201284050282280/AnsiballZ_copy.py'
Jan 20 18:32:52 compute-0 sudo[38862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:52 compute-0 python3.9[38864]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768933971.2047362-1087-201284050282280/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:32:52 compute-0 sudo[38862]: pam_unix(sudo:session): session closed for user root
Jan 20 18:32:53 compute-0 sudo[39014]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhpltgqybjbgtwjxkebkqjvuslvwtvdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933972.8710492-1141-66874299613186/AnsiballZ_dnf.py'
Jan 20 18:32:53 compute-0 sudo[39014]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:32:53 compute-0 python3.9[39016]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:32:56 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:32:56 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:32:57 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:32:57 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:32:57 compute-0 systemd[1]: Reloading.
Jan 20 18:32:57 compute-0 systemd-rc-local-generator[39080]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:32:57 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:32:58 compute-0 sudo[39014]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:33:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:33:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.582s CPU time.
Jan 20 18:33:00 compute-0 systemd[1]: run-r5429984446ce461f9e627969c5aff874.service: Deactivated successfully.
Jan 20 18:33:02 compute-0 python3.9[42730]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:33:03 compute-0 python3.9[42882]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 18:33:04 compute-0 python3.9[43032]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:33:04 compute-0 sudo[43182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkxetfdzhhrdedzqxrxfaxnunpilcnvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933984.5530827-1258-16122782062103/AnsiballZ_command.py'
Jan 20 18:33:04 compute-0 sudo[43182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:04 compute-0 python3.9[43184]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:33:05 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 18:33:05 compute-0 systemd[1]: Starting Authorization Manager...
Jan 20 18:33:05 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 18:33:05 compute-0 polkitd[43401]: Started polkitd version 0.117
Jan 20 18:33:05 compute-0 polkitd[43401]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 18:33:05 compute-0 polkitd[43401]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 18:33:05 compute-0 polkitd[43401]: Finished loading, compiling and executing 2 rules
Jan 20 18:33:05 compute-0 systemd[1]: Started Authorization Manager.
Jan 20 18:33:05 compute-0 polkitd[43401]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 20 18:33:05 compute-0 sudo[43182]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:06 compute-0 sudo[43569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlqfkdmeshxqrledixcrhfopfkgeqyat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933986.2183597-1285-25065465598328/AnsiballZ_systemd.py'
Jan 20 18:33:06 compute-0 sudo[43569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:06 compute-0 python3.9[43571]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:33:06 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 18:33:06 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 18:33:06 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 18:33:06 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 18:33:07 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 18:33:07 compute-0 sudo[43569]: pam_unix(sudo:session): session closed for user root
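tuned-adm writes the selected profile to /etc/tuned/active_profile (the file the earlier stat/slurp tasks inspected) and talks to the tuned daemon over D-Bus, which is plausibly why polkitd starts alongside it here. CLI equivalent of the two tasks:

    tuned-adm profile throughput-performance   # select and apply the profile
    systemctl enable --now tuned.service       # make the daemon persistent
    tuned-adm active                           # verify the active profile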
Jan 20 18:33:07 compute-0 python3.9[43732]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 18:33:11 compute-0 sudo[43882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkogapmojkwgaislfbomfwgojshzacfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933991.1235566-1456-181786508511220/AnsiballZ_systemd.py'
Jan 20 18:33:11 compute-0 sudo[43882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:11 compute-0 python3.9[43884]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:33:11 compute-0 systemd[1]: Reloading.
Jan 20 18:33:11 compute-0 systemd-rc-local-generator[43913]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:33:11 compute-0 sudo[43882]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:12 compute-0 sudo[44071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cchscajgmhmbnngqcbstqgxnlogamhyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933992.0887103-1456-86670075584820/AnsiballZ_systemd.py'
Jan 20 18:33:12 compute-0 sudo[44071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:12 compute-0 python3.9[44073]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:33:12 compute-0 systemd[1]: Reloading.
Jan 20 18:33:12 compute-0 systemd-rc-local-generator[44101]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:33:12 compute-0 sudo[44071]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:14 compute-0 sudo[44259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqxkrhpvroxwerppdzmswovefmmllept ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933993.575054-1504-156073263126102/AnsiballZ_command.py'
Jan 20 18:33:14 compute-0 sudo[44259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:15 compute-0 python3.9[44261]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:33:15 compute-0 sudo[44259]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:15 compute-0 sudo[44412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zulwhnmgheioaqywlprifbrcpfftjtpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933995.3963394-1528-219301458390479/AnsiballZ_command.py'
Jan 20 18:33:15 compute-0 sudo[44412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:15 compute-0 python3.9[44414]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:33:15 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 20 18:33:15 compute-0 sudo[44412]: pam_unix(sudo:session): session closed for user root
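mkswap formats the file staged at 18:32 and swapon activates it. The kernel reports 1048572k rather than the full 1048576 KiB (1024 MiB) because mkswap reserves the first 4 KiB page of the area for its signature header: 1048576 - 4 = 1048572. To verify by hand:

    mkswap /swap        # writes the swap signature, prints size and UUID
    swapon /swap        # kernel logs "Adding 1048572k swap on /swap"
    swapon --show       # expect: /swap  file  1024M  ...  prio -2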
Jan 20 18:33:16 compute-0 sudo[44565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgpblbbvwjzkgxgzwpbxykgvqnccxskv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933996.1350324-1552-126563436943218/AnsiballZ_command.py'
Jan 20 18:33:16 compute-0 sudo[44565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:16 compute-0 python3.9[44567]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:33:18 compute-0 sudo[44565]: pam_unix(sudo:session): session closed for user root
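update-ca-trust regenerates the consolidated trust stores (e.g. /etc/pki/tls/certs/ca-bundle.crt) from everything under /etc/pki/ca-trust/source/anchors/, which is what makes the tls-ca-bundle.pem installed at 18:32:28 take effect. The usual two-step, for reference:

    cp tls-ca-bundle.pem /etc/pki/ca-trust/source/anchors/
    update-ca-trust            # "extract" is the implied default subcommand
    trust list | head          # optionally eyeball the resulting store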
Jan 20 18:33:19 compute-0 sudo[44727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bglotvreawavdefldiarxlutjzklfwhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933998.912732-1576-224793766670721/AnsiballZ_command.py'
Jan 20 18:33:19 compute-0 sudo[44727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:19 compute-0 python3.9[44729]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:33:19 compute-0 sudo[44727]: pam_unix(sudo:session): session closed for user root
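Writing 2 to /sys/kernel/mm/ksm/run stops KSM and unmerges all merged pages, complementing the ksm/ksmtuned service stops at 18:33:11. One caveat visible in the invocation above: with _uses_shell=False the command module runs no shell, so the ">" is passed to echo as a literal argument and the sysfs file is never written; the text "2 >/sys/kernel/mm/ksm/run" merely goes to the task's stdout. A form that actually performs the write:

    sh -c 'echo 2 > /sys/kernel/mm/ksm/run'   # redirection handled by a real shell
    cat /sys/kernel/mm/ksm/run                # shows the current run state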
Jan 20 18:33:19 compute-0 sudo[44880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nujwyvlszqsaycxepvuqkccxsfptfhtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768933999.699156-1600-238485186977926/AnsiballZ_systemd.py'
Jan 20 18:33:19 compute-0 sudo[44880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:20 compute-0 python3.9[44882]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:33:20 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 18:33:20 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 20 18:33:20 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 20 18:33:20 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 20 18:33:20 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 18:33:20 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 20 18:33:20 compute-0 sudo[44880]: pam_unix(sudo:session): session closed for user root
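Restarting systemd-sysctl re-applies every fragment under /etc/sysctl.d/, including the 99-edpm.conf installed at 18:32:52. The one-shot CLI equivalent:

    sysctl --system    # reload all sysctl configuration fragments in order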
Jan 20 18:33:20 compute-0 sshd-session[31277]: Connection closed by 192.168.122.30 port 49602
Jan 20 18:33:20 compute-0 sshd-session[31274]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:33:20 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 18:33:20 compute-0 systemd[1]: session-9.scope: Consumed 2min 22.088s CPU time.
Jan 20 18:33:20 compute-0 systemd-logind[796]: Session 9 logged out. Waiting for processes to exit.
Jan 20 18:33:20 compute-0 systemd-logind[796]: Removed session 9.
Jan 20 18:33:26 compute-0 sshd-session[44914]: Accepted publickey for zuul from 192.168.122.30 port 49156 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:33:26 compute-0 systemd-logind[796]: New session 10 of user zuul.
Jan 20 18:33:26 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 20 18:33:26 compute-0 sshd-session[44914]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:33:26 compute-0 sshd-session[44945]: error: kex_exchange_identification: read: Connection reset by peer
Jan 20 18:33:26 compute-0 sshd-session[44945]: Connection reset by 176.120.22.52 port 18639
Jan 20 18:33:27 compute-0 python3.9[45069]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:33:28 compute-0 sudo[45223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laaexieydaaqvptkpndtpyzotmbtagni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934007.9104135-63-93726740043169/AnsiballZ_getent.py'
Jan 20 18:33:28 compute-0 sudo[45223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:28 compute-0 python3.9[45225]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 18:33:28 compute-0 sudo[45223]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:29 compute-0 sudo[45376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyustmffrchvnolajixanfqbtenijrmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934008.8632123-87-236293939950843/AnsiballZ_group.py'
Jan 20 18:33:29 compute-0 sudo[45376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:29 compute-0 python3.9[45378]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:33:29 compute-0 groupadd[45379]: group added to /etc/group: name=openvswitch, GID=42476
Jan 20 18:33:29 compute-0 groupadd[45379]: group added to /etc/gshadow: name=openvswitch
Jan 20 18:33:29 compute-0 groupadd[45379]: new group: name=openvswitch, GID=42476
Jan 20 18:33:29 compute-0 sudo[45376]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:30 compute-0 sudo[45534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eteaczxnnabjivtntdeaeeewyjuvickj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934009.8802722-111-79856646393971/AnsiballZ_user.py'
Jan 20 18:33:30 compute-0 sudo[45534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:30 compute-0 python3.9[45536]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 18:33:30 compute-0 useradd[45538]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 18:33:30 compute-0 useradd[45538]: add 'openvswitch' to group 'hugetlbfs'
Jan 20 18:33:30 compute-0 useradd[45538]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 20 18:33:30 compute-0 sudo[45534]: pam_unix(sudo:session): session closed for user root
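[editor's note] The three tasks above (getent → group → user) pin the openvswitch account to fixed IDs (GID/UID 42476, shell /sbin/nologin, supplementary group hugetlbfs) before the package install, so the RPM scriptlets reuse them instead of allocating their own. A minimal stand-alone sketch of the same sequence, assuming shadow-utils on PATH and an existing hugetlbfs group — not the Ansible modules' actual implementation:

```python
# Sketch of the group/user bootstrap logged above; IDs taken from the log.
# Assumes shadow-utils (groupadd/useradd) and an existing 'hugetlbfs' group.
import grp
import pwd
import subprocess

NAME, GID, UID = "openvswitch", 42476, 42476

def ensure_group():
    try:
        grp.getgrnam(NAME)                      # same existence check getent does
    except KeyError:
        subprocess.run(["groupadd", "-g", str(GID), NAME], check=True)

def ensure_user():
    try:
        pwd.getpwnam(NAME)
    except KeyError:
        subprocess.run(
            ["useradd", "-u", str(UID), "-g", NAME, "-G", "hugetlbfs",
             "-s", "/sbin/nologin", "-c", "openvswitch user", NAME],
            check=True,
        )

if __name__ == "__main__":
    ensure_group()
    ensure_user()
```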
Jan 20 18:33:31 compute-0 sudo[45694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htjclyfxfyjsesamdhqvthmcsvtqrffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934011.3124819-141-93698365951914/AnsiballZ_setup.py'
Jan 20 18:33:31 compute-0 sudo[45694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:31 compute-0 python3.9[45696]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:33:32 compute-0 sudo[45694]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:32 compute-0 sudo[45778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgqlxmdapmghcjjlwlycctifonyejydt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934011.3124819-141-93698365951914/AnsiballZ_dnf.py'
Jan 20 18:33:32 compute-0 sudo[45778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:32 compute-0 python3.9[45780]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 18:33:34 compute-0 sudo[45778]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:36 compute-0 sudo[45941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duzjqiufeqvpcjmzhxzxlgmazuwdflur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934015.9601119-183-69307469018698/AnsiballZ_dnf.py'
Jan 20 18:33:36 compute-0 sudo[45941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:36 compute-0 python3.9[45943]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:33:47 compute-0 kernel: SELinux:  Converting 2736 SID table entries...
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:33:47 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:33:47 compute-0 groupadd[45966]: group added to /etc/group: name=unbound, GID=994
Jan 20 18:33:47 compute-0 groupadd[45966]: group added to /etc/gshadow: name=unbound
Jan 20 18:33:47 compute-0 groupadd[45966]: new group: name=unbound, GID=994
Jan 20 18:33:47 compute-0 useradd[45973]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 20 18:33:47 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 20 18:33:47 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 20 18:33:48 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:33:48 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:33:48 compute-0 systemd[1]: Reloading.
Jan 20 18:33:48 compute-0 systemd-sysv-generator[46473]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:33:48 compute-0 systemd-rc-local-generator[46470]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:33:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:33:49 compute-0 sudo[45941]: pam_unix(sudo:session): session closed for user root
Jan 20 18:33:49 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:33:49 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:33:49 compute-0 systemd[1]: run-ra743c09b8333401e99f5843df757e7fa.service: Deactivated successfully.
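[editor's note] The dnf tasks at 18:33:32 and 18:33:36 run in two phases: download_only=True fetches the openvswitch package, then a second pass with state=present performs the transaction (which triggers the SELinux policy reload and man-db cache update logged above). A hedged CLI equivalent of that split, standing in for the ansible.legacy.dnf module:

```python
# Two-phase install mirroring the logged dnf tasks: fetch first, then transact.
import subprocess

PKG = "openvswitch"
subprocess.run(["dnf", "-y", "--downloadonly", "install", PKG], check=True)
subprocess.run(["dnf", "-y", "install", PKG], check=True)
```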
Jan 20 18:33:53 compute-0 sudo[47040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apzpyqbgdeoqmcjkazehyosizjbkvvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934032.493081-207-165415220000965/AnsiballZ_systemd.py'
Jan 20 18:33:53 compute-0 sudo[47040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:53 compute-0 python3.9[47042]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:33:53 compute-0 systemd[1]: Reloading.
Jan 20 18:33:53 compute-0 systemd-sysv-generator[47078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:33:53 compute-0 systemd-rc-local-generator[47074]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:33:53 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 20 18:33:53 compute-0 chown[47085]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 20 18:33:53 compute-0 ovs-ctl[47090]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 20 18:33:54 compute-0 ovs-ctl[47090]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 20 18:33:54 compute-0 ovs-ctl[47090]: Starting ovsdb-server [  OK  ]
Jan 20 18:33:54 compute-0 ovs-vsctl[47139]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 20 18:33:54 compute-0 ovs-vsctl[47155]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"7018ca8a-de0e-4b56-bb43-675238d4f8b3\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 20 18:33:54 compute-0 ovs-ctl[47090]: Configuring Open vSwitch system IDs [  OK  ]
Jan 20 18:33:54 compute-0 ovs-vsctl[47165]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 18:33:54 compute-0 ovs-ctl[47090]: Enabling remote OVSDB managers [  OK  ]
Jan 20 18:33:54 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 20 18:33:54 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 20 18:33:54 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 20 18:33:54 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 20 18:33:54 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 20 18:33:54 compute-0 ovs-ctl[47209]: Inserting openvswitch module [  OK  ]
Jan 20 18:33:54 compute-0 ovs-ctl[47178]: Starting ovs-vswitchd [  OK  ]
Jan 20 18:33:54 compute-0 ovs-ctl[47178]: Enabling remote OVSDB managers [  OK  ]
Jan 20 18:33:54 compute-0 ovs-vsctl[47227]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 18:33:54 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 20 18:33:54 compute-0 systemd[1]: Starting Open vSwitch...
Jan 20 18:33:54 compute-0 systemd[1]: Finished Open vSwitch.
Jan 20 18:33:54 compute-0 sudo[47040]: pam_unix(sudo:session): session closed for user root
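[editor's note] The systemd task above enables and starts openvswitch.service; on first start ovs-ctl seeds /etc/openvswitch/conf.db, launches ovsdb-server and ovs-vswitchd, and stamps the database with version and system IDs. A small sketch that performs the same enable/start and reads back what ovs-ctl wrote, assuming the openvswitch package from the earlier task:

```python
# Enable/start openvswitch.service as the ansible.builtin.systemd task does,
# then read back the identifiers ovs-ctl recorded in the database.
import subprocess

subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)
for key in ("ovs-version", "external-ids:system-id"):
    out = subprocess.run(["ovs-vsctl", "get", "Open_vSwitch", ".", key],
                         check=True, capture_output=True, text=True)
    print(key, "=", out.stdout.strip())
```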
Jan 20 18:33:55 compute-0 python3.9[47378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:33:56 compute-0 sudo[47528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvsyyyrgnltosiorppcgcyhtotmzanxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934035.7771406-261-60232879893573/AnsiballZ_sefcontext.py'
Jan 20 18:33:56 compute-0 sudo[47528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:33:56 compute-0 python3.9[47530]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 18:33:57 compute-0 kernel: SELinux:  Converting 2750 SID table entries...
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:33:57 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:33:57 compute-0 sudo[47528]: pam_unix(sudo:session): session closed for user root
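[editor's note] The sefcontext task persists an SELinux file-context rule for /var/lib/edpm-config(/.*)? and reloads policy, which is what the kernel's "Converting 2750 SID table entries" lines reflect. A rough CLI equivalent, assuming policycoreutils-python-utils is installed; note that semanage -a errors if the rule already exists, whereas the module is idempotent:

```python
# CLI equivalent of the community.general.sefcontext task above:
# persist the rule, then relabel anything already under the path.
# Target, type, and level come straight from the logged invocation.
import subprocess

TARGET = "/var/lib/edpm-config(/.*)?"
subprocess.run(["semanage", "fcontext", "-a", "-t", "container_file_t",
                "-r", "s0", TARGET], check=True)
subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)
```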
Jan 20 18:33:59 compute-0 python3.9[47686]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:33:59 compute-0 sudo[47842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsnuluqxwcwizzzoqmfewjexhmctdnbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934039.5232193-315-264147029326136/AnsiballZ_dnf.py'
Jan 20 18:33:59 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 20 18:33:59 compute-0 sudo[47842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:00 compute-0 python3.9[47844]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:34:01 compute-0 sudo[47842]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:02 compute-0 sudo[47995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgcffpjchwfzaxbarmjkgiznvdeftbef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934041.8014941-339-115607803945339/AnsiballZ_command.py'
Jan 20 18:34:02 compute-0 sudo[47995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:02 compute-0 python3.9[47997]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:34:03 compute-0 sudo[47995]: pam_unix(sudo:session): session closed for user root
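[editor's note] The command task above runs rpm -V across the freshly installed package set as a post-install integrity check. A trimmed sketch of the same check (subset of the logged list); rpm -V exits non-zero on any discrepancy, including legitimately modified %config files, so check=True is strict here:

```python
# Post-install verification, as in the ansible.legacy.command task above.
import subprocess

pkgs = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager", "sos"]
subprocess.run(["rpm", "-V", *pkgs], check=True)
```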
Jan 20 18:34:03 compute-0 sudo[48282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxhbdffrwjpiajcxypbdgngbhgorkxdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934043.4477377-363-196160134912253/AnsiballZ_file.py'
Jan 20 18:34:03 compute-0 sudo[48282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:04 compute-0 python3.9[48284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 18:34:04 compute-0 sudo[48282]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:04 compute-0 python3.9[48434]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:34:05 compute-0 sudo[48586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmbkmgzuhyqevvufutpeeqjtldcobmpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934045.2537444-411-25834031738008/AnsiballZ_dnf.py'
Jan 20 18:34:05 compute-0 sudo[48586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:05 compute-0 python3.9[48588]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:34:07 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:34:07 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:34:07 compute-0 systemd[1]: Reloading.
Jan 20 18:34:07 compute-0 systemd-rc-local-generator[48626]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:34:07 compute-0 systemd-sysv-generator[48631]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:34:07 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:34:07 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:34:07 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:34:07 compute-0 systemd[1]: run-r90fbbe861a65406593fca009da27247a.service: Deactivated successfully.
Jan 20 18:34:08 compute-0 sudo[48586]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:08 compute-0 sudo[48902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjwpiwscumulrjmnrnkkhjlduhzersuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934048.3009963-435-217884470160034/AnsiballZ_systemd.py'
Jan 20 18:34:08 compute-0 sudo[48902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:08 compute-0 python3.9[48904]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:34:08 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 18:34:08 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 20 18:34:08 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 20 18:34:08 compute-0 systemd[1]: Stopping Network Manager...
Jan 20 18:34:08 compute-0 NetworkManager[7206]: <info>  [1768934048.9054] caught SIGTERM, shutting down normally.
Jan 20 18:34:08 compute-0 NetworkManager[7206]: <info>  [1768934048.9066] dhcp4 (eth0): canceled DHCP transaction
Jan 20 18:34:08 compute-0 NetworkManager[7206]: <info>  [1768934048.9067] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:34:08 compute-0 NetworkManager[7206]: <info>  [1768934048.9067] dhcp4 (eth0): state changed no lease
Jan 20 18:34:08 compute-0 NetworkManager[7206]: <info>  [1768934048.9068] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:34:08 compute-0 NetworkManager[7206]: <info>  [1768934048.9124] exiting (success)
Jan 20 18:34:08 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:34:08 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:34:08 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 18:34:08 compute-0 systemd[1]: Stopped Network Manager.
Jan 20 18:34:08 compute-0 systemd[1]: NetworkManager.service: Consumed 12.202s CPU time, 4.1M memory peak, read 0B from disk, written 11.0K to disk.
Jan 20 18:34:08 compute-0 systemd[1]: Starting Network Manager...
Jan 20 18:34:08 compute-0 NetworkManager[48914]: <info>  [1768934048.9784] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:7a60faef-372d-4827-b0d0-8fdd6d433663)
Jan 20 18:34:08 compute-0 NetworkManager[48914]: <info>  [1768934048.9785] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 18:34:08 compute-0 NetworkManager[48914]: <info>  [1768934048.9858] manager[0x55cac16f6000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 18:34:09 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 18:34:09 compute-0 systemd[1]: Started Hostname Service.
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1012] hostname: hostname: using hostnamed
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1013] hostname: static hostname changed from (none) to "compute-0"
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1019] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1024] manager[0x55cac16f6000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1024] manager[0x55cac16f6000]: rfkill: WWAN hardware radio set enabled
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1048] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1059] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1060] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1060] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1061] manager: Networking is enabled by state file
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1063] settings: Loaded settings plugin: keyfile (internal)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1067] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1095] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1108] dhcp: init: Using DHCP client 'internal'
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1110] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1117] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1123] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1133] device (lo): Activation: starting connection 'lo' (ee6edf19-39c6-4a96-abbb-0d8aa9c964b6)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1141] device (eth0): carrier: link connected
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1145] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1152] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1153] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1160] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1167] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1174] device (eth1): carrier: link connected
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1179] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1184] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (df32b0f8-e05a-5256-8e63-e2a619e93c70) (indicated)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1185] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1191] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1200] device (eth1): Activation: starting connection 'ci-private-network' (df32b0f8-e05a-5256-8e63-e2a619e93c70)
Jan 20 18:34:09 compute-0 systemd[1]: Started Network Manager.
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1207] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1221] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1224] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1226] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1228] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1231] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1234] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1235] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1239] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1245] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1248] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1256] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1270] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1282] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1285] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1293] device (lo): Activation: successful, device activated.
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1301] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1309] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 18:34:09 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1369] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1374] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1385] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1388] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1391] device (eth1): Activation: successful, device activated.
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1435] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1436] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1439] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1442] device (eth0): Activation: successful, device activated.
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1447] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 18:34:09 compute-0 NetworkManager[48914]: <info>  [1768934049.1476] manager: startup complete
Jan 20 18:34:09 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 20 18:34:09 compute-0 sudo[48902]: pam_unix(sudo:session): session closed for user root
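[editor's note] The restart above makes NetworkManager pick up the just-installed NetworkManager-ovs plugin — NMOvsFactory now appears in the plugin list at 18:34:09, where it was absent before. A minimal restart-and-wait sketch, using nm-online the same way NetworkManager-wait-online.service does:

```python
# Restart NetworkManager and block until startup completes, matching the
# restart at 18:34:08 and the Wait Online unit that finished at 18:34:09.
import subprocess

subprocess.run(["systemctl", "restart", "NetworkManager"], check=True)
# nm-online -s waits for NM startup; -t caps the wait in seconds.
subprocess.run(["nm-online", "-s", "-q", "-t", "30"], check=True)
```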
Jan 20 18:34:10 compute-0 sudo[49128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lexgavnfjhzeysuhidqpqwgxmfdfrkfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934050.5982087-459-266264894041109/AnsiballZ_dnf.py'
Jan 20 18:34:10 compute-0 sudo[49128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:11 compute-0 python3.9[49130]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:34:15 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:34:15 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:34:15 compute-0 systemd[1]: Reloading.
Jan 20 18:34:15 compute-0 systemd-sysv-generator[49184]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:34:15 compute-0 systemd-rc-local-generator[49179]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:34:15 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:34:16 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:34:16 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:34:16 compute-0 systemd[1]: run-r33cd7e602f62460288ca3c9cbd6c2766.service: Deactivated successfully.
Jan 20 18:34:16 compute-0 sudo[49128]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:18 compute-0 sudo[49589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixiyzjseserfojquuuyseawnggnqqkxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934057.9647138-495-226933544314240/AnsiballZ_stat.py'
Jan 20 18:34:18 compute-0 sudo[49589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:18 compute-0 python3.9[49591]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:34:18 compute-0 sudo[49589]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:19 compute-0 sudo[49741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhmuyijsevrnnrkmggeqihlxjkzcnksz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934058.7263553-522-4859252102762/AnsiballZ_ini_file.py'
Jan 20 18:34:19 compute-0 sudo[49741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:19 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:34:19 compute-0 python3.9[49743]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:19 compute-0 sudo[49741]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:20 compute-0 sudo[49895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-davumtclmvpjnoheckvgyhehnzexfcmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934059.7971885-552-7502092064458/AnsiballZ_ini_file.py'
Jan 20 18:34:20 compute-0 sudo[49895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:20 compute-0 python3.9[49897]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:20 compute-0 sudo[49895]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:20 compute-0 sudo[50047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfyowdttdprmgbbiwfsysyvwjrltybyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934060.4128652-552-147290300746270/AnsiballZ_ini_file.py'
Jan 20 18:34:20 compute-0 sudo[50047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:20 compute-0 python3.9[50049]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:20 compute-0 sudo[50047]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:21 compute-0 sudo[50199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eupsdgplgvufnzckelizqagpxwvjxhdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934061.2815764-597-235301218530718/AnsiballZ_ini_file.py'
Jan 20 18:34:21 compute-0 sudo[50199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:21 compute-0 python3.9[50201]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:21 compute-0 sudo[50199]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:22 compute-0 sudo[50351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhwilpicayympnrmbqmjsltspzdeycwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934061.8370042-597-126530047461398/AnsiballZ_ini_file.py'
Jan 20 18:34:22 compute-0 sudo[50351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:22 compute-0 python3.9[50353]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:22 compute-0 sudo[50351]: pam_unix(sudo:session): session closed for user root
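[editor's note] The five ini_file tasks between 18:34:19 and 18:34:22 converge NetworkManager's configuration: ensure no-auto-default=* under [main] in NetworkManager.conf, and remove any dns / rc-manager overrides there and in 99-cloud-init.conf. A configparser sketch of the same edits — unlike ini_file it keeps no backups, drops comments on rewrite, and writes "key = value" rather than the module's no_extra_spaces form:

```python
# configparser rendition of the five ini_file tasks above.
# Paths, keys, and values come from the logged invocations.
import configparser

def edit(path, present, absent):
    cfg = configparser.ConfigParser()
    cfg.read(path)                      # a missing file reads as empty (create=True)
    if not cfg.has_section("main"):
        cfg.add_section("main")
    for key, value in present.items():
        cfg.set("main", key, value)
    for key in absent:
        cfg.remove_option("main", key)
    with open(path, "w") as fh:
        cfg.write(fh)

edit("/etc/NetworkManager/NetworkManager.conf",
     {"no-auto-default": "*"}, ["dns", "rc-manager"])
edit("/etc/NetworkManager/conf.d/99-cloud-init.conf",
     {}, ["dns", "rc-manager"])
```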
Jan 20 18:34:23 compute-0 sudo[50503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sinxrdtmuifuzroxfaxsqdshrhksopwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934062.8961828-642-139275615954470/AnsiballZ_stat.py'
Jan 20 18:34:23 compute-0 sudo[50503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:23 compute-0 python3.9[50505]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:34:23 compute-0 sudo[50503]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:23 compute-0 sudo[50626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kusqkfjmydvaapgzhfahnructyguteum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934062.8961828-642-139275615954470/AnsiballZ_copy.py'
Jan 20 18:34:23 compute-0 sudo[50626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:24 compute-0 python3.9[50628]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934062.8961828-642-139275615954470/.source _original_basename=.gzs6125f follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:24 compute-0 sudo[50626]: pam_unix(sudo:session): session closed for user root
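[editor's note] The stat/copy pair above is Ansible's checksum-gated file deployment: the target's sha1 is read first and the copy happens only on mismatch (the logged checksum is f6278a40…). A minimal sketch of that pattern; the source path is hypothetical, since the hook's payload is not in the log:

```python
# Checksum-gated deploy of the dhclient enter hook, as in the stat/copy tasks.
import hashlib
import os
import shutil

DEST = "/etc/dhcp/dhclient-enter-hooks"
SRC = "dhclient-enter-hooks"          # hypothetical local copy of the payload

def sha1(path):
    digest = hashlib.sha1()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if not os.path.exists(DEST) or sha1(DEST) != sha1(SRC):
    shutil.copyfile(SRC, DEST)
    os.chmod(DEST, 0o755)             # mode=0755 per the logged copy task
```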
Jan 20 18:34:24 compute-0 sudo[50778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxlclbouitflbeuhclpogzcugwveryie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934064.2850115-687-87743031699000/AnsiballZ_file.py'
Jan 20 18:34:24 compute-0 sudo[50778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:24 compute-0 python3.9[50780]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:24 compute-0 sudo[50778]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:25 compute-0 sudo[50930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyuuuavdhnybztjkaqjiymethhyoghap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934065.058291-711-187184351621835/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 20 18:34:25 compute-0 sudo[50930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:25 compute-0 python3.9[50932]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 20 18:34:25 compute-0 sudo[50930]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:26 compute-0 sudo[51082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adjgorxiahqqantfjfxnjrukwydkdpse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934065.9359887-738-34543148128757/AnsiballZ_file.py'
Jan 20 18:34:26 compute-0 sudo[51082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:26 compute-0 python3.9[51084]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:26 compute-0 sudo[51082]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:27 compute-0 sudo[51234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkqekkvckfnygmaywstexltijymzsost ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934066.9105935-768-76719944454266/AnsiballZ_stat.py'
Jan 20 18:34:27 compute-0 sudo[51234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:27 compute-0 sudo[51234]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:27 compute-0 sudo[51357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tixxbefvidttpoldbtmsoovdhhkagbma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934066.9105935-768-76719944454266/AnsiballZ_copy.py'
Jan 20 18:34:27 compute-0 sudo[51357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:27 compute-0 sudo[51357]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:28 compute-0 sudo[51509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwaqdakyfkokxckriickyccowudvlbpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934068.3186035-813-119914264353398/AnsiballZ_slurp.py'
Jan 20 18:34:28 compute-0 sudo[51509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:28 compute-0 python3.9[51511]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 20 18:34:28 compute-0 sudo[51509]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:30 compute-0 sudo[51684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrfnckvzggglmxoheatvgpqmmpxypafb ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934069.2820268-840-156433279603006/async_wrapper.py j140389510917 300 /home/zuul/.ansible/tmp/ansible-tmp-1768934069.2820268-840-156433279603006/AnsiballZ_edpm_os_net_config.py _'
Jan 20 18:34:30 compute-0 sudo[51684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:30 compute-0 ansible-async_wrapper.py[51686]: Invoked with j140389510917 300 /home/zuul/.ansible/tmp/ansible-tmp-1768934069.2820268-840-156433279603006/AnsiballZ_edpm_os_net_config.py _
Jan 20 18:34:30 compute-0 ansible-async_wrapper.py[51689]: Starting module and watcher
Jan 20 18:34:30 compute-0 ansible-async_wrapper.py[51689]: Start watching 51690 (300)
Jan 20 18:34:30 compute-0 ansible-async_wrapper.py[51690]: Start module (51690)
Jan 20 18:34:30 compute-0 ansible-async_wrapper.py[51686]: Return async_wrapper task started.
Jan 20 18:34:30 compute-0 sudo[51684]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:30 compute-0 python3.9[51691]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
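[editor's note] The async_wrapper lines show the network change running as an Ansible async job (id j140389510917, 300 s watcher) so that losing the SSH session mid-reconfiguration cannot kill it; underneath, the module drives os-net-config against /etc/os-net-config/config.yaml. A rough CLI equivalent assembled from the logged options (cleanup, debug, detailed_exit_codes) — the flags are the stock os-net-config ones, not the module's internals:

```python
# Rough CLI equivalent of the edpm_os_net_config invocation above.
import subprocess

rc = subprocess.run(
    ["os-net-config", "-c", "/etc/os-net-config/config.yaml",
     "--detailed-exit-codes", "--debug", "--cleanup"],
).returncode
# With --detailed-exit-codes, rc 2 conventionally signals "changes applied",
# not failure; 0 means no changes were needed.
print("os-net-config exit code:", rc)
```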
Jan 20 18:34:31 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 20 18:34:31 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 20 18:34:31 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 20 18:34:31 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 20 18:34:31 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0374] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0394] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0802] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0803] audit: op="connection-add" uuid="a52a83ba-04b2-4a8e-b79b-ffdd29a7224e" name="br-ex-br" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0815] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0816] audit: op="connection-add" uuid="8eae3e46-74bc-4a89-b81b-ea05fe9c78a5" name="br-ex-port" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0826] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0827] audit: op="connection-add" uuid="e5e58a7e-8e97-4272-8972-1dd0ce4149b0" name="eth1-port" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0837] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0838] audit: op="connection-add" uuid="d3147aba-be9e-47a6-a64e-3dd875ee8fa8" name="vlan20-port" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0847] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0848] audit: op="connection-add" uuid="c20cc94d-8fd2-476d-abbc-08a50103fde0" name="vlan21-port" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0856] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0858] audit: op="connection-add" uuid="6bbdafc9-9f66-4aba-bd61-3e9db8716cbc" name="vlan22-port" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0867] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0868] audit: op="connection-add" uuid="d7c26c50-e933-4064-a9bd-86afb20d73fb" name="vlan23-port" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0883] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode,ipv6.dhcp-timeout" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0897] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0899] audit: op="connection-add" uuid="cf47f34b-aa83-4300-802e-550e152a5654" name="br-ex-if" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0950] audit: op="connection-update" uuid="df32b0f8-e05a-5256-8e63-e2a619e93c70" name="ci-private-network" args="connection.master,connection.controller,connection.slave-type,connection.timestamp,connection.port-type,ovs-external-ids.data,ipv4.addresses,ipv4.never-default,ipv4.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv6.addresses,ipv6.method,ipv6.addr-gen-mode,ipv6.routes,ipv6.dns,ipv6.routing-rules,ovs-interface.type" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0965] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0967] audit: op="connection-add" uuid="25e80cd3-bf41-4ecc-8d60-a2dc20f6c904" name="vlan20-if" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0980] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0982] audit: op="connection-add" uuid="42a322ae-5730-4ccf-ad8c-c9ae997ba4cc" name="vlan21-if" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0995] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.0997] audit: op="connection-add" uuid="b54c7d84-785e-4079-b398-f31e4553203f" name="vlan22-if" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1011] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1013] audit: op="connection-add" uuid="769ecc96-f6e9-4e5f-9878-4b0222879eae" name="vlan23-if" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1024] audit: op="connection-delete" uuid="602bf063-40be-3863-86f7-7246e64f3d42" name="Wired connection 1" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1035] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1038] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1044] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1048] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a52a83ba-04b2-4a8e-b79b-ffdd29a7224e)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1049] audit: op="connection-activate" uuid="a52a83ba-04b2-4a8e-b79b-ffdd29a7224e" name="br-ex-br" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1052] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1053] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1059] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1063] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8eae3e46-74bc-4a89-b81b-ea05fe9c78a5)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1065] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1066] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1071] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1075] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (e5e58a7e-8e97-4272-8972-1dd0ce4149b0)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1077] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1079] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1084] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1088] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (d3147aba-be9e-47a6-a64e-3dd875ee8fa8)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1090] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1092] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1097] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1101] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c20cc94d-8fd2-476d-abbc-08a50103fde0)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1103] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1105] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1110] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1113] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (6bbdafc9-9f66-4aba-bd61-3e9db8716cbc)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1116] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1117] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1122] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1126] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (d7c26c50-e933-4064-a9bd-86afb20d73fb)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1128] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1130] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1133] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1139] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1140] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1143] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1148] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (cf47f34b-aa83-4300-802e-550e152a5654)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1149] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1153] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1155] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1157] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1159] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1168] device (eth1): disconnecting for new activation request.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1170] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1173] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1175] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1177] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1180] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1182] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1185] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1190] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (25e80cd3-bf41-4ecc-8d60-a2dc20f6c904)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1191] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1194] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1197] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1199] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1202] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1203] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1207] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1213] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (42a322ae-5730-4ccf-ad8c-c9ae997ba4cc)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1215] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1218] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1221] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1222] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1225] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1226] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1229] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1232] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (b54c7d84-785e-4079-b398-f31e4553203f)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1232] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1235] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1237] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1237] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1239] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <warn>  [1768934072.1240] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1243] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1246] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (769ecc96-f6e9-4e5f-9878-4b0222879eae)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1247] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1249] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1251] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1251] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1252] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1262] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,ipv6.method,ipv6.addr-gen-mode" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1263] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1266] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1267] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1272] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1275] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1277] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1279] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1281] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1284] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1286] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1289] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1290] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1293] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1297] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1300] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1302] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1305] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1308] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1310] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1311] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1315] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1318] dhcp4 (eth0): canceled DHCP transaction
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1318] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1318] dhcp4 (eth0): state changed no lease
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1319] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1328] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51692 uid=0 result="fail" reason="Device is not activated"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1358] device (eth1): disconnecting for new activation request.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1359] audit: op="connection-activate" uuid="df32b0f8-e05a-5256-8e63-e2a619e93c70" name="ci-private-network" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1371] dhcp4 (eth0): state changed new lease, address=38.102.83.13
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1787] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 20 18:34:32 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1794] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1803] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 20 18:34:32 compute-0 kernel: Timeout policy base is empty
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1817] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1820] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51692 uid=0 result="success"
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.1823] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 20 18:34:32 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:34:32 compute-0 systemd-udevd[51698]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:34:32 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2003] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 20 18:34:32 compute-0 kernel: br-ex: entered promiscuous mode
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2145] device (eth1): Activation: starting connection 'ci-private-network' (df32b0f8-e05a-5256-8e63-e2a619e93c70)
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2151] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2152] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2154] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2155] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2156] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2157] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2158] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2166] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2169] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2181] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2187] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2193] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2198] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2201] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2205] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2210] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2215] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2220] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2224] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2229] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2232] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 kernel: vlan22: entered promiscuous mode
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2235] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2238] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 systemd-udevd[51697]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2254] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2277] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2284] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2289] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 kernel: vlan21: entered promiscuous mode
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2320] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2328] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2333] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2338] device (eth1): Activation: successful, device activated.
Jan 20 18:34:32 compute-0 systemd-udevd[51696]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:34:32 compute-0 kernel: vlan23: entered promiscuous mode
Jan 20 18:34:32 compute-0 kernel: vlan20: entered promiscuous mode
Jan 20 18:34:32 compute-0 systemd-udevd[51804]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2407] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2411] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2414] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2428] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2477] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2489] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2491] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2506] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2515] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2520] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2522] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2533] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2542] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2542] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2544] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2547] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2550] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2553] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2556] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2567] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2597] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2598] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:34:32 compute-0 NetworkManager[48914]: <info>  [1768934072.2601] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
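[annotation] The activation burst above follows NetworkManager's Open vSwitch model: every OVS interface is backed by three stacked profiles (ovs-bridge, ovs-port, ovs-interface), and VLAN access ports carry an ovs-port.tag, which is why br-ex and each vlan2x device walks the state machine three times. A minimal sketch of the profile stack the connection names imply, in nmcli syntax; the IP methods and tag numbers are assumptions, not values taken from the log:

# Bridge plus its internal interface (names mirror the log's connection names):
nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex
nmcli conn add type ovs-port conn.interface br-ex master br-ex con-name br-ex-port
nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex \
    master br-ex-port con-name br-ex-if ipv4.method auto
# Physical uplink enslaved to the bridge:
nmcli conn add type ovs-port conn.interface eth1 master br-ex con-name eth1-port
nmcli conn add type ethernet conn.interface eth1 master eth1-port con-name ci-private-network
# One access-port/interface pair per VLAN (vlan21-23 follow the same pattern):
nmcli conn add type ovs-port conn.interface vlan20 master br-ex ovs-port.tag 20 con-name vlan20-port
nmcli conn add type ovs-interface slave-type ovs-port conn.interface vlan20 \
    master vlan20-port con-name vlan20-if ipv4.method disabled ipv6.method disabled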
Jan 20 18:34:33 compute-0 NetworkManager[48914]: <info>  [1768934073.3737] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51692 uid=0 result="success"
Jan 20 18:34:33 compute-0 NetworkManager[48914]: <info>  [1768934073.5348] checkpoint[0x55cac16cc950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 20 18:34:33 compute-0 NetworkManager[48914]: <info>  [1768934073.5351] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51692 uid=0 result="success"
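[annotation] The checkpoint audit entries bracketing this reconfiguration are NetworkManager's transactional safety net (the mechanism nmstate-style tooling relies on): if the caller stops confirming, everything rolls back when the timeout expires, and destroying the checkpoint commits the change. Hedged D-Bus equivalents of the three audited operations; the 60-second timeouts are illustrative:

# Open a transaction over all devices (empty device array) with a 60 s rollback timeout:
busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 0
# Push the rollback deadline out while the new state is verified:
busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou \
    /org/freedesktop/NetworkManager/Checkpoint/1 60
# Commit: destroying the checkpoint cancels the pending rollback:
busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager CheckpointDestroy o \
    /org/freedesktop/NetworkManager/Checkpoint/1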
Jan 20 18:34:33 compute-0 sudo[52050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gknszlnnjpukswppcexdkrkgwcvnjkzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934073.3822186-840-214596700863726/AnsiballZ_async_status.py'
Jan 20 18:34:33 compute-0 sudo[52050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:33 compute-0 NetworkManager[48914]: <info>  [1768934073.8389] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51692 uid=0 result="success"
Jan 20 18:34:33 compute-0 NetworkManager[48914]: <info>  [1768934073.8402] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51692 uid=0 result="success"
Jan 20 18:34:33 compute-0 python3.9[52052]: ansible-ansible.legacy.async_status Invoked with jid=j140389510917.51686 mode=status _async_dir=/root/.ansible_async
Jan 20 18:34:33 compute-0 sudo[52050]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:34 compute-0 NetworkManager[48914]: <info>  [1768934074.0317] audit: op="networking-control" arg="global-dns-configuration" pid=51692 uid=0 result="success"
Jan 20 18:34:34 compute-0 NetworkManager[48914]: <info>  [1768934074.0342] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 20 18:34:34 compute-0 NetworkManager[48914]: <info>  [1768934074.0369] audit: op="networking-control" arg="global-dns-configuration" pid=51692 uid=0 result="success"
Jan 20 18:34:34 compute-0 NetworkManager[48914]: <info>  [1768934074.0387] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51692 uid=0 result="success"
Jan 20 18:34:34 compute-0 NetworkManager[48914]: <info>  [1768934074.1675] checkpoint[0x55cac16cca20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 20 18:34:34 compute-0 NetworkManager[48914]: <info>  [1768934074.1691] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51692 uid=0 result="success"
Jan 20 18:34:34 compute-0 ansible-async_wrapper.py[51690]: Module complete (51690)
Jan 20 18:34:35 compute-0 ansible-async_wrapper.py[51689]: Done in kid B.
Jan 20 18:34:37 compute-0 sudo[52154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzxmltqsntacezosingnfcnyafzoloze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934073.3822186-840-214596700863726/AnsiballZ_async_status.py'
Jan 20 18:34:37 compute-0 sudo[52154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:37 compute-0 python3.9[52156]: ansible-ansible.legacy.async_status Invoked with jid=j140389510917.51686 mode=status _async_dir=/root/.ansible_async
Jan 20 18:34:37 compute-0 sudo[52154]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:37 compute-0 sudo[52254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhiuvyhvxstmwqcayjgecgxbsilqhfmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934073.3822186-840-214596700863726/AnsiballZ_async_status.py'
Jan 20 18:34:37 compute-0 sudo[52254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:37 compute-0 python3.9[52256]: ansible-ansible.legacy.async_status Invoked with jid=j140389510917.51686 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 18:34:37 compute-0 sudo[52254]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:38 compute-0 sudo[52406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udcalhrobbezipmsljpphxszmynejrnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934078.4704201-921-182274172811699/AnsiballZ_stat.py'
Jan 20 18:34:38 compute-0 sudo[52406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:38 compute-0 python3.9[52408]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:34:38 compute-0 sudo[52406]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:39 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:34:39 compute-0 sudo[52532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qerytuoqpogssrlpeegzszczvzbfebij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934078.4704201-921-182274172811699/AnsiballZ_copy.py'
Jan 20 18:34:39 compute-0 sudo[52532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:39 compute-0 python3.9[52534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934078.4704201-921-182274172811699/.source.returncode _original_basename=.u3do6qzd follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:34:39 compute-0 sudo[52532]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:40 compute-0 sudo[52684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgmjnzmkssbqbgzotlfibkaakxinxjgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934080.0173807-969-200956105567744/AnsiballZ_stat.py'
Jan 20 18:34:40 compute-0 sudo[52684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:40 compute-0 python3.9[52686]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:34:40 compute-0 sudo[52684]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:40 compute-0 sudo[52808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckaqkvtegtclkhygzxrqtfjxfvujrzxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934080.0173807-969-200956105567744/AnsiballZ_copy.py'
Jan 20 18:34:40 compute-0 sudo[52808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:41 compute-0 python3.9[52810]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934080.0173807-969-200956105567744/.source.cfg _original_basename=.5n37zd8l follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
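[annotation] The drop-in written above is named after cloud-init's standard switch for handing network configuration over to another renderer (here os-net-config); only its checksum appears in the log, so the following is a plausible sketch rather than the actual template output:

cat <<'EOF' > /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg
network:
  config: disabled
EOF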
Jan 20 18:34:41 compute-0 sudo[52808]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:41 compute-0 sudo[52960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jclvccteoxypqufxfozkyyledpmvdsfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934081.4374352-1014-182171713385935/AnsiballZ_systemd.py'
Jan 20 18:34:41 compute-0 sudo[52960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:34:41 compute-0 python3.9[52962]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:34:42 compute-0 systemd[1]: Reloading Network Manager...
Jan 20 18:34:42 compute-0 NetworkManager[48914]: <info>  [1768934082.0242] audit: op="reload" arg="0" pid=52966 uid=0 result="success"
Jan 20 18:34:42 compute-0 NetworkManager[48914]: <info>  [1768934082.0266] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 20 18:34:42 compute-0 systemd[1]: Reloaded Network Manager.
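[annotation] The reload requested through the systemd module (state=reloaded) delivers SIGHUP to NetworkManager, which re-reads its configuration files without restarting or dropping connections, as the preceding config: signal line confirms. Equivalent manual invocations:

systemctl reload NetworkManager
nmcli general reload          # newer NM; scope can be limited, e.g. 'nmcli general reload conf'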
Jan 20 18:34:42 compute-0 sudo[52960]: pam_unix(sudo:session): session closed for user root
Jan 20 18:34:42 compute-0 sshd-session[44918]: Connection closed by 192.168.122.30 port 49156
Jan 20 18:34:42 compute-0 sshd-session[44914]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:34:42 compute-0 systemd-logind[796]: Session 10 logged out. Waiting for processes to exit.
Jan 20 18:34:42 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 18:34:42 compute-0 systemd[1]: session-10.scope: Consumed 47.888s CPU time.
Jan 20 18:34:42 compute-0 systemd-logind[796]: Removed session 10.
Jan 20 18:34:48 compute-0 sshd-session[52997]: Accepted publickey for zuul from 192.168.122.30 port 51644 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:34:48 compute-0 systemd-logind[796]: New session 11 of user zuul.
Jan 20 18:34:48 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 20 18:34:48 compute-0 sshd-session[52997]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:34:49 compute-0 python3.9[53150]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:34:50 compute-0 python3.9[53304]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:34:51 compute-0 python3.9[53498]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:34:52 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:34:52 compute-0 sshd-session[53000]: Connection closed by 192.168.122.30 port 51644
Jan 20 18:34:52 compute-0 sshd-session[52997]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:34:52 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 18:34:52 compute-0 systemd[1]: session-11.scope: Consumed 2.217s CPU time.
Jan 20 18:34:52 compute-0 systemd-logind[796]: Session 11 logged out. Waiting for processes to exit.
Jan 20 18:34:52 compute-0 systemd-logind[796]: Removed session 11.
Jan 20 18:34:58 compute-0 sshd-session[53527]: Accepted publickey for zuul from 192.168.122.30 port 37456 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:34:58 compute-0 systemd-logind[796]: New session 12 of user zuul.
Jan 20 18:34:58 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 20 18:34:58 compute-0 sshd-session[53527]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:34:59 compute-0 python3.9[53680]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:35:00 compute-0 python3.9[53835]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:35:01 compute-0 sudo[53989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuvcibbboqugovykbrfwakvnyesaecbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934101.0013843-75-187220224669978/AnsiballZ_setup.py'
Jan 20 18:35:01 compute-0 sudo[53989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:01 compute-0 python3.9[53991]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:35:01 compute-0 sudo[53989]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:02 compute-0 sudo[54073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akmghwqaayjfzpkcljpfkimhgdbtuzpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934101.0013843-75-187220224669978/AnsiballZ_dnf.py'
Jan 20 18:35:02 compute-0 sudo[54073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:02 compute-0 python3.9[54075]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:35:03 compute-0 sudo[54073]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:04 compute-0 sudo[54227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rspewgbsxpepcirvehdczbusmhzpbptw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934104.0410302-111-249926284639786/AnsiballZ_setup.py'
Jan 20 18:35:04 compute-0 sudo[54227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:04 compute-0 python3.9[54229]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:35:04 compute-0 sudo[54227]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:05 compute-0 sudo[54422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdpqaiafmhywfsvczwlrmzreegcpyyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934105.421938-144-184163093804713/AnsiballZ_file.py'
Jan 20 18:35:05 compute-0 sudo[54422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:05 compute-0 python3.9[54424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:35:05 compute-0 sudo[54422]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:06 compute-0 sudo[54574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsdclhydsmkldckvkrwazlpeemqkjyko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934106.2975452-168-200484217425280/AnsiballZ_command.py'
Jan 20 18:35:06 compute-0 sudo[54574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:06 compute-0 python3.9[54576]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:35:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4098636958-merged.mount: Deactivated successfully.
Jan 20 18:35:08 compute-0 podman[54577]: 2026-01-20 18:35:08.36379532 +0000 UTC m=+1.442679381 system refresh
Jan 20 18:35:08 compute-0 sudo[54574]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:09 compute-0 sudo[54736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dermtdzqcsjtrflqsehyzvrblophjbxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934108.5555925-192-70892383966360/AnsiballZ_stat.py'
Jan 20 18:35:09 compute-0 sudo[54736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:09 compute-0 python3.9[54738]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:35:09 compute-0 sudo[54736]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:35:09 compute-0 sudo[54859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poeiiutcvxfeqgmhyvtdvkgceqsnjwez ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934108.5555925-192-70892383966360/AnsiballZ_copy.py'
Jan 20 18:35:09 compute-0 sudo[54859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:09 compute-0 python3.9[54861]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934108.5555925-192-70892383966360/.source.json follow=False _original_basename=podman_network_config.j2 checksum=043b2e3d85204470b95254da172203170adc3bf5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
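[annotation] podman reads /etc/containers/networks/*.json as netavark network definitions, so templating the default 'podman' network here pins its bridge name and subnet instead of leaving them to built-in defaults. The template output itself is not logged (only a checksum); a hypothetical definition of the usual shape:

cat <<'EOF' > /etc/containers/networks/podman.json
{
  "name": "podman",
  "driver": "bridge",
  "network_interface": "podman0",
  "subnets": [
    { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }
  ],
  "ipv6_enabled": false,
  "internal": false,
  "dns_enabled": false,
  "ipam_options": { "driver": "host-local" }
}
EOF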
Jan 20 18:35:09 compute-0 sudo[54859]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:10 compute-0 sudo[55011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbpprxbuhpcbdbjwlzibvrpbklugavci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934110.1173558-237-2707935899817/AnsiballZ_stat.py'
Jan 20 18:35:10 compute-0 sudo[55011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:10 compute-0 python3.9[55013]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:35:10 compute-0 sudo[55011]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:10 compute-0 sudo[55134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byabjgoaxtkaokqltubivirvydyjprai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934110.1173558-237-2707935899817/AnsiballZ_copy.py'
Jan 20 18:35:10 compute-0 sudo[55134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:11 compute-0 python3.9[55136]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768934110.1173558-237-2707935899817/.source.conf follow=False _original_basename=registries.conf.j2 checksum=7583035fe00323822d170f910e3cbd96dee33d94 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
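[annotation] The registries drop-in uses the containers-registries.conf(5) v2 TOML format; its content is likewise logged only as a checksum, so the registry names below are assumptions chosen to show the shape:

cat <<'EOF' > /etc/containers/registries.conf.d/20-edpm-podman-registries.conf
unqualified-search-registries = ["registry.redhat.io", "quay.io", "docker.io"]

[[registry]]
prefix = "quay.io"
location = "quay.io"
EOF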
Jan 20 18:35:11 compute-0 sudo[55134]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:12 compute-0 sudo[55286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcpsrzomquzcvguwuqjhzrdcjzvmjtnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934111.5928028-285-240527216487567/AnsiballZ_ini_file.py'
Jan 20 18:35:12 compute-0 sudo[55286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:12 compute-0 python3.9[55288]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:35:12 compute-0 sudo[55286]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:12 compute-0 sudo[55438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlezlbivxdxbgjbkhcrbqbaannfaqvzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934112.493213-285-103355258307166/AnsiballZ_ini_file.py'
Jan 20 18:35:12 compute-0 sudo[55438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:13 compute-0 python3.9[55440]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:35:13 compute-0 sudo[55438]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:13 compute-0 sudo[55590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uirrdnzndysdevyofayifrgfytzgpiuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934113.2986314-285-227766503933863/AnsiballZ_ini_file.py'
Jan 20 18:35:13 compute-0 sudo[55590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:13 compute-0 python3.9[55592]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:35:13 compute-0 sudo[55590]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:14 compute-0 sudo[55742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjfeysrhkyevaomiezcbhzlcwzauzpcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934113.9521263-285-75402317919213/AnsiballZ_ini_file.py'
Jan 20 18:35:14 compute-0 sudo[55742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:14 compute-0 python3.9[55744]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
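[annotation] Taken together, the four ini_file tasks (pids_limit, events_logger, runtime, network_backend) leave /etc/containers/containers.conf with the sections below; the keys and values are verbatim from the log, and the listing assumes the file did not exist beforehand (each task passes create=True):

cat /etc/containers/containers.conf
# [containers]
# pids_limit = 4096
#
# [engine]
# events_logger = "journald"
# runtime = "crun"
#
# [network]
# network_backend = "netavark"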
Jan 20 18:35:14 compute-0 sudo[55742]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:15 compute-0 sudo[55894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecxthubosqzbwavewawrmkmcyygzabpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934115.1402338-378-190875467971374/AnsiballZ_dnf.py'
Jan 20 18:35:15 compute-0 sudo[55894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:15 compute-0 python3.9[55896]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:35:16 compute-0 sudo[55894]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:18 compute-0 sudo[56047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfvwtcxdwowtwcwldokqsqjygzlbtwtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934118.3853369-411-184777026039988/AnsiballZ_setup.py'
Jan 20 18:35:18 compute-0 sudo[56047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:18 compute-0 python3.9[56049]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:35:18 compute-0 sudo[56047]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:19 compute-0 sudo[56201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrgmzrjeedhsdpxfzmgtgxdndwnvpzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934119.2592928-435-141107281807931/AnsiballZ_stat.py'
Jan 20 18:35:19 compute-0 sudo[56201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:19 compute-0 python3.9[56203]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:35:19 compute-0 sudo[56201]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:20 compute-0 sudo[56353]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijpvpgnzhgpucnrovkqzaxphhlbbrbbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934120.216324-462-172151447838427/AnsiballZ_stat.py'
Jan 20 18:35:20 compute-0 sudo[56353]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:20 compute-0 python3.9[56355]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:35:20 compute-0 sudo[56353]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:21 compute-0 sudo[56505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdibmuurgpwownkjxtrbbsycrigcrndq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934121.060168-492-225485791671987/AnsiballZ_command.py'
Jan 20 18:35:21 compute-0 sudo[56505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:21 compute-0 python3.9[56507]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:35:21 compute-0 sudo[56505]: pam_unix(sudo:session): session closed for user root
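[annotation] systemctl is-system-running is a cheap readiness probe: it prints the overall manager state and exits 0 only for 'running', so a playbook can gate later tasks on a fully started host.

systemctl is-system-running   # prints running | degraded | starting | maintenance | ...
echo $?                       # 0 only when the state is 'running'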
Jan 20 18:35:22 compute-0 sudo[56658]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rparpgaaqmskqztzltsqznuxgvgvfnmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934122.1433918-522-88823209691906/AnsiballZ_service_facts.py'
Jan 20 18:35:22 compute-0 sudo[56658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:22 compute-0 python3.9[56660]: ansible-service_facts Invoked
Jan 20 18:35:22 compute-0 network[56677]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:35:22 compute-0 network[56678]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:35:22 compute-0 network[56679]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:35:29 compute-0 sudo[56658]: pam_unix(sudo:session): session closed for user root
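[annotation] The three deprecation warnings above come from the legacy initscripts 'network' service, which service_facts touches while enumerating units. On hosts still carrying ifcfg files, recent NetworkManager (1.36+) can convert them to native keyfiles in place; shown as a hedged pointer, not a step this job performs:

nmcli connection migrate      # rewrite ifcfg-rh profiles as keyfiles under /etc/NetworkManager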
Jan 20 18:35:30 compute-0 sudo[56962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbnfjduaqwjsaojqengvshfgnybxjvrc ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768934130.455474-567-258935725951141/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768934130.455474-567-258935725951141/args'
Jan 20 18:35:30 compute-0 sudo[56962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:30 compute-0 sudo[56962]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:31 compute-0 sudo[57129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzzodkgwtwwevuzynecjvxutzlziuhzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934131.2447264-600-3453697937271/AnsiballZ_dnf.py'
Jan 20 18:35:31 compute-0 sudo[57129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:31 compute-0 python3.9[57131]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:35:32 compute-0 sudo[57129]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:34 compute-0 sudo[57282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yreibsatfiwxsucavohclzavzlapuuxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934133.8553545-639-26241347025645/AnsiballZ_package_facts.py'
Jan 20 18:35:34 compute-0 sudo[57282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:34 compute-0 python3.9[57284]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 18:35:34 compute-0 sudo[57282]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:35 compute-0 sudo[57434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwisfnbapmystvpbisbvgajvtgupwuqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934135.6728575-669-26310626856060/AnsiballZ_stat.py'
Jan 20 18:35:35 compute-0 sudo[57434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:36 compute-0 python3.9[57436]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:35:36 compute-0 sudo[57434]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:36 compute-0 sudo[57559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbbbvepiolakpexerheqwqjkptrtmgpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934135.6728575-669-26310626856060/AnsiballZ_copy.py'
Jan 20 18:35:36 compute-0 sudo[57559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:36 compute-0 python3.9[57561]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934135.6728575-669-26310626856060/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:35:36 compute-0 sudo[57559]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:37 compute-0 sudo[57713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glxkxzzdkbrjnlclydwsnqorqtybadxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934137.3191128-714-190506316325476/AnsiballZ_stat.py'
Jan 20 18:35:37 compute-0 sudo[57713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:37 compute-0 python3.9[57715]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:35:37 compute-0 sudo[57713]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:38 compute-0 sudo[57838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-philxecxzamdzahdskekrjbizljrbcha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934137.3191128-714-190506316325476/AnsiballZ_copy.py'
Jan 20 18:35:38 compute-0 sudo[57838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:38 compute-0 python3.9[57840]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934137.3191128-714-190506316325476/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:35:38 compute-0 sudo[57838]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:39 compute-0 sudo[57992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlxzwwyxhkptzeuqujozzqgiehzojjrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934139.5090065-777-112648547062785/AnsiballZ_lineinfile.py'
Jan 20 18:35:39 compute-0 sudo[57992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:40 compute-0 python3.9[57994]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:35:40 compute-0 sudo[57992]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:41 compute-0 sudo[58146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epuaflbagsxxsahatlmzaehuuuyokoct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934141.2309856-822-154985146037452/AnsiballZ_setup.py'
Jan 20 18:35:41 compute-0 sudo[58146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:41 compute-0 python3.9[58148]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:35:41 compute-0 sudo[58146]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:42 compute-0 sudo[58230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sknlzmsuhsyctbbvngrimdgbytatvblw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934141.2309856-822-154985146037452/AnsiballZ_systemd.py'
Jan 20 18:35:42 compute-0 sudo[58230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:42 compute-0 python3.9[58232]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:35:42 compute-0 sudo[58230]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:43 compute-0 sudo[58384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htowmqtmkfoybnzihasowadansceokho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934143.7089164-870-107486044760356/AnsiballZ_setup.py'
Jan 20 18:35:43 compute-0 sudo[58384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:44 compute-0 python3.9[58386]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:35:44 compute-0 sudo[58384]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:44 compute-0 sudo[58468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfuuftdtanivveiaoynaqlalijmeysws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934143.7089164-870-107486044760356/AnsiballZ_systemd.py'
Jan 20 18:35:44 compute-0 sudo[58468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:44 compute-0 python3.9[58470]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:35:44 compute-0 chronyd[784]: chronyd exiting
Jan 20 18:35:44 compute-0 systemd[1]: Stopping NTP client/server...
Jan 20 18:35:44 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 20 18:35:44 compute-0 systemd[1]: Stopped NTP client/server.
Jan 20 18:35:44 compute-0 systemd[1]: Starting NTP client/server...
Jan 20 18:35:45 compute-0 chronyd[58478]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 18:35:45 compute-0 chronyd[58478]: Frequency -26.502 +/- 0.181 ppm read from /var/lib/chrony/drift
Jan 20 18:35:45 compute-0 chronyd[58478]: Loaded seccomp filter (level 2)
Jan 20 18:35:45 compute-0 systemd[1]: Started NTP client/server.
Jan 20 18:35:45 compute-0 sudo[58468]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:45 compute-0 sshd-session[53530]: Connection closed by 192.168.122.30 port 37456
Jan 20 18:35:45 compute-0 sshd-session[53527]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:35:45 compute-0 systemd-logind[796]: Session 12 logged out. Waiting for processes to exit.
Jan 20 18:35:45 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 18:35:45 compute-0 systemd[1]: session-12.scope: Consumed 24.214s CPU time.
Jan 20 18:35:45 compute-0 systemd-logind[796]: Removed session 12.
Jan 20 18:35:51 compute-0 sshd-session[58504]: Accepted publickey for zuul from 192.168.122.30 port 59808 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:35:51 compute-0 systemd-logind[796]: New session 13 of user zuul.
Jan 20 18:35:51 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 20 18:35:51 compute-0 sshd-session[58504]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:35:52 compute-0 sudo[58657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfcxuujupnyeypbdnrbeecgiggkrchzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934151.6331096-21-232758941460246/AnsiballZ_file.py'
Jan 20 18:35:52 compute-0 sudo[58657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:52 compute-0 python3.9[58659]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:35:52 compute-0 sudo[58657]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:53 compute-0 sudo[58809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqggxxmscmuxrqqbsqskvycrblqkezcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934153.0887458-57-178477461284758/AnsiballZ_stat.py'
Jan 20 18:35:53 compute-0 sudo[58809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:53 compute-0 python3.9[58811]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:35:53 compute-0 sudo[58809]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:54 compute-0 sudo[58932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cosofzsrpioqoaiugpdhuorqawonmeyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934153.0887458-57-178477461284758/AnsiballZ_copy.py'
Jan 20 18:35:54 compute-0 sudo[58932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:35:54 compute-0 python3.9[58934]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934153.0887458-57-178477461284758/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:35:54 compute-0 sudo[58932]: pam_unix(sudo:session): session closed for user root
Jan 20 18:35:55 compute-0 sshd-session[58507]: Connection closed by 192.168.122.30 port 59808
Jan 20 18:35:55 compute-0 sshd-session[58504]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:35:55 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 18:35:55 compute-0 systemd[1]: session-13.scope: Consumed 1.606s CPU time.
Jan 20 18:35:55 compute-0 systemd-logind[796]: Session 13 logged out. Waiting for processes to exit.
Jan 20 18:35:55 compute-0 systemd-logind[796]: Removed session 13.
Jan 20 18:36:00 compute-0 sshd-session[58959]: Accepted publickey for zuul from 192.168.122.30 port 59816 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:36:00 compute-0 systemd-logind[796]: New session 14 of user zuul.
Jan 20 18:36:00 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 20 18:36:00 compute-0 sshd-session[58959]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:36:01 compute-0 python3.9[59112]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:36:02 compute-0 sudo[59266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaljmkdwnqkkxsrpkpgqswpahxorkseb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934162.167359-54-101397314719992/AnsiballZ_file.py'
Jan 20 18:36:02 compute-0 sudo[59266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:02 compute-0 python3.9[59268]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:02 compute-0 sudo[59266]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:03 compute-0 sudo[59441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bldvbznhjutzgzysyfrpqzpvzjmkjbfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934162.9562294-78-260473488146126/AnsiballZ_stat.py'
Jan 20 18:36:03 compute-0 sudo[59441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:03 compute-0 python3.9[59443]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:03 compute-0 sudo[59441]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:04 compute-0 sudo[59564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzchvvgrmrfggokxoikjrmtwueyvczqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934162.9562294-78-260473488146126/AnsiballZ_copy.py'
Jan 20 18:36:04 compute-0 sudo[59564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:04 compute-0 python3.9[59566]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768934162.9562294-78-260473488146126/.source.json _original_basename=.9ad3ursy follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:04 compute-0 sudo[59564]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:05 compute-0 sudo[59716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyxattszfsbhnhwnelkxvilhybphswsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934164.9755318-147-271728729812381/AnsiballZ_stat.py'
Jan 20 18:36:05 compute-0 sudo[59716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:05 compute-0 python3.9[59718]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:05 compute-0 sudo[59716]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:05 compute-0 sudo[59839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzyvewoxiojxmxljcqofolstqrjnpcjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934164.9755318-147-271728729812381/AnsiballZ_copy.py'
Jan 20 18:36:05 compute-0 sudo[59839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:05 compute-0 python3.9[59841]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934164.9755318-147-271728729812381/.source _original_basename=.9ynx72k6 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:06 compute-0 sudo[59839]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:06 compute-0 sudo[59991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtkrbuzusztuvvdqsrwhvqwznptijtrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934166.3539824-195-80849353094115/AnsiballZ_file.py'
Jan 20 18:36:06 compute-0 sudo[59991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:06 compute-0 python3.9[59993]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:36:06 compute-0 sudo[59991]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:07 compute-0 sudo[60143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggbphpjurapprxweqnmzwiywtdntckin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934167.1294143-219-237104135764590/AnsiballZ_stat.py'
Jan 20 18:36:07 compute-0 sudo[60143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:07 compute-0 python3.9[60145]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:07 compute-0 sudo[60143]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:07 compute-0 sudo[60266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlvqfbjzrwzfkmivdmkxtxnvjjczxxdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934167.1294143-219-237104135764590/AnsiballZ_copy.py'
Jan 20 18:36:07 compute-0 sudo[60266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:08 compute-0 python3.9[60268]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768934167.1294143-219-237104135764590/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:36:08 compute-0 sudo[60266]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:08 compute-0 sudo[60418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adwhpfbpqtxjjoxxlvieplzipapvyklc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934168.191887-219-86258179832831/AnsiballZ_stat.py'
Jan 20 18:36:08 compute-0 sudo[60418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:08 compute-0 python3.9[60420]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:08 compute-0 sudo[60418]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:09 compute-0 sudo[60541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgsacddnzbsdzsaopwsbpekceinvffe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934168.191887-219-86258179832831/AnsiballZ_copy.py'
Jan 20 18:36:09 compute-0 sudo[60541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:09 compute-0 python3.9[60543]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768934168.191887-219-86258179832831/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:36:09 compute-0 sudo[60541]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:10 compute-0 sudo[60694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrbvuigtydfdzxldgcthpgsyevwzdxuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934169.9787385-306-99350179404962/AnsiballZ_file.py'
Jan 20 18:36:10 compute-0 sudo[60694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:10 compute-0 python3.9[60696]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:10 compute-0 sudo[60694]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:10 compute-0 sudo[60846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdcpstlrqoqfujafrhhckeehjnygdclj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934170.6515038-330-75367599721285/AnsiballZ_stat.py'
Jan 20 18:36:10 compute-0 sudo[60846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:11 compute-0 python3.9[60848]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:11 compute-0 sudo[60846]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:11 compute-0 sudo[60969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzcidfuthvtbufhhawxsbemtkwkrntfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934170.6515038-330-75367599721285/AnsiballZ_copy.py'
Jan 20 18:36:11 compute-0 sudo[60969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:11 compute-0 python3.9[60971]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934170.6515038-330-75367599721285/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:11 compute-0 sudo[60969]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:12 compute-0 sudo[61121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymemwcazeowjwrejcevumbeysnoihqjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934172.1284678-375-189799990357753/AnsiballZ_stat.py'
Jan 20 18:36:12 compute-0 sudo[61121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:12 compute-0 python3.9[61123]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:12 compute-0 sudo[61121]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:12 compute-0 sudo[61244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwlyjfaceiwylmrzrlfiuxdlwbmgzpoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934172.1284678-375-189799990357753/AnsiballZ_copy.py'
Jan 20 18:36:12 compute-0 sudo[61244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:13 compute-0 python3.9[61246]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934172.1284678-375-189799990357753/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:13 compute-0 sudo[61244]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:14 compute-0 sudo[61396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmxxpgxyznniejxmsmmdyihvvlojuze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934173.5056381-420-240660708508580/AnsiballZ_systemd.py'
Jan 20 18:36:14 compute-0 sudo[61396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:14 compute-0 python3.9[61398]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:36:14 compute-0 systemd[1]: Reloading.
Jan 20 18:36:14 compute-0 systemd-sysv-generator[61425]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:36:14 compute-0 systemd-rc-local-generator[61421]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:14 compute-0 systemd[1]: Reloading.
Jan 20 18:36:14 compute-0 systemd-rc-local-generator[61461]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:14 compute-0 systemd-sysv-generator[61464]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:36:15 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 20 18:36:15 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 20 18:36:15 compute-0 sudo[61396]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:15 compute-0 sudo[61624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivdidadcitezlevxscibzqvnbkdtslj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934175.2108185-444-83769065228433/AnsiballZ_stat.py'
Jan 20 18:36:15 compute-0 sudo[61624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:15 compute-0 python3.9[61626]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:15 compute-0 sudo[61624]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:16 compute-0 sudo[61747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgzihxqkyfgnedqscengplpnzqclqjhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934175.2108185-444-83769065228433/AnsiballZ_copy.py'
Jan 20 18:36:16 compute-0 sudo[61747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:16 compute-0 python3.9[61749]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934175.2108185-444-83769065228433/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:16 compute-0 sudo[61747]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:17 compute-0 sudo[61899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swxibjkbgzlowozbmtwgsjmmbcbyolyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934176.929749-489-179883563823504/AnsiballZ_stat.py'
Jan 20 18:36:17 compute-0 sudo[61899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:17 compute-0 python3.9[61901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:17 compute-0 sudo[61899]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:17 compute-0 sudo[62022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hapuvleejvkmlwgwwbufndvmchvtqmyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934176.929749-489-179883563823504/AnsiballZ_copy.py'
Jan 20 18:36:17 compute-0 sudo[62022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:18 compute-0 python3.9[62024]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934176.929749-489-179883563823504/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:18 compute-0 sudo[62022]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:18 compute-0 sudo[62174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtpwhsrhozieljalnyzprgxhrnfklxek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934178.2980692-534-112845335370992/AnsiballZ_systemd.py'
Jan 20 18:36:18 compute-0 sudo[62174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:18 compute-0 python3.9[62176]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:36:18 compute-0 systemd[1]: Reloading.
Jan 20 18:36:19 compute-0 systemd-rc-local-generator[62204]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:19 compute-0 systemd-sysv-generator[62208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:36:19 compute-0 systemd[1]: Reloading.
Jan 20 18:36:19 compute-0 systemd-sysv-generator[62244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:36:19 compute-0 systemd-rc-local-generator[62240]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:19 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 18:36:19 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 18:36:19 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 18:36:19 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 18:36:19 compute-0 sudo[62174]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:20 compute-0 python3.9[62402]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:36:20 compute-0 network[62419]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:36:20 compute-0 network[62420]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:36:20 compute-0 network[62421]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:36:25 compute-0 sudo[62681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-micovkxzyytyqmkyxnafqqwxadfhbjnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934185.4951246-582-168284029245042/AnsiballZ_systemd.py'
Jan 20 18:36:25 compute-0 sudo[62681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:26 compute-0 python3.9[62683]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:36:26 compute-0 systemd[1]: Reloading.
Jan 20 18:36:26 compute-0 systemd-sysv-generator[62717]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:36:26 compute-0 systemd-rc-local-generator[62713]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:26 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 20 18:36:27 compute-0 iptables.init[62723]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 20 18:36:27 compute-0 iptables.init[62723]: iptables: Flushing firewall rules: [  OK  ]
Jan 20 18:36:27 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 20 18:36:27 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 20 18:36:27 compute-0 sudo[62681]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:27 compute-0 sudo[62917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrskykxlmapkyeomzvvfwtbgqarnihlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934187.448302-582-178130789312498/AnsiballZ_systemd.py'
Jan 20 18:36:27 compute-0 sudo[62917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:28 compute-0 python3.9[62919]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:36:28 compute-0 sudo[62917]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:28 compute-0 sudo[63071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikjjmfdecimeofcqqhnkjuaqyanetbqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934188.3540335-630-205502519198157/AnsiballZ_systemd.py'
Jan 20 18:36:28 compute-0 sudo[63071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:28 compute-0 python3.9[63073]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:36:28 compute-0 systemd[1]: Reloading.
Jan 20 18:36:29 compute-0 systemd-rc-local-generator[63105]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:29 compute-0 systemd-sysv-generator[63108]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:36:29 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 20 18:36:29 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 20 18:36:29 compute-0 sudo[63071]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:30 compute-0 sudo[63265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahbugizrcmbbhrlaiuwbvqjfgncbmhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934189.745726-654-45234934112705/AnsiballZ_command.py'
Jan 20 18:36:30 compute-0 sudo[63265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:30 compute-0 python3.9[63267]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:30 compute-0 sudo[63265]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:31 compute-0 sudo[63418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etbvejivjsymnomwfcqtmemvgecbbznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934191.1388052-696-176221885721330/AnsiballZ_stat.py'
Jan 20 18:36:31 compute-0 sudo[63418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:31 compute-0 python3.9[63420]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:31 compute-0 sudo[63418]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:32 compute-0 sudo[63543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uksphjrbyggpimtbpqavubxbjutsnkbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934191.1388052-696-176221885721330/AnsiballZ_copy.py'
Jan 20 18:36:32 compute-0 sudo[63543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:32 compute-0 python3.9[63545]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934191.1388052-696-176221885721330/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:32 compute-0 sudo[63543]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:32 compute-0 sudo[63696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vticejjnlkwtrfmsvotvzxnjmzcgeqnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934192.5771284-741-236219736143967/AnsiballZ_systemd.py'
Jan 20 18:36:32 compute-0 sudo[63696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:33 compute-0 python3.9[63698]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:36:33 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 20 18:36:33 compute-0 sshd[1004]: Received SIGHUP; restarting.
Jan 20 18:36:33 compute-0 sshd[1004]: Server listening on 0.0.0.0 port 22.
Jan 20 18:36:33 compute-0 sshd[1004]: Server listening on :: port 22.
Jan 20 18:36:33 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 20 18:36:33 compute-0 sudo[63696]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:33 compute-0 sudo[63852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqdpgpdlkkthpbwpikogqsxncgysvxuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934193.5418286-765-178262423600443/AnsiballZ_file.py'
Jan 20 18:36:33 compute-0 sudo[63852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:33 compute-0 python3.9[63854]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:34 compute-0 sudo[63852]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:34 compute-0 sudo[64004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjtyhnrprkanzqunbgzeyauisjpdsjgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934194.5130844-789-135210350676220/AnsiballZ_stat.py'
Jan 20 18:36:34 compute-0 sudo[64004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:34 compute-0 python3.9[64006]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:34 compute-0 sudo[64004]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:35 compute-0 sudo[64127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexztuwlduqefnkccghwqofinetvufuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934194.5130844-789-135210350676220/AnsiballZ_copy.py'
Jan 20 18:36:35 compute-0 sudo[64127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:35 compute-0 python3.9[64129]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934194.5130844-789-135210350676220/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:35 compute-0 sudo[64127]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:36 compute-0 sudo[64279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shtofvbejzqpnscujemrvczldshflhcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934196.2742348-843-261196535763961/AnsiballZ_timezone.py'
Jan 20 18:36:36 compute-0 sudo[64279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:36 compute-0 python3.9[64281]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 18:36:36 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 18:36:36 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 18:36:37 compute-0 sudo[64279]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:38 compute-0 sudo[64435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaefglwsalgegqmusugxsebkkmonurcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934198.3882337-870-157630790819109/AnsiballZ_file.py'
Jan 20 18:36:38 compute-0 sudo[64435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:38 compute-0 python3.9[64437]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:38 compute-0 sudo[64435]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:39 compute-0 sudo[64587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobakimesmqjmsgkxwiubhlrpywogbvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934199.4264748-894-222127683441817/AnsiballZ_stat.py'
Jan 20 18:36:39 compute-0 sudo[64587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:39 compute-0 python3.9[64589]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:39 compute-0 sudo[64587]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:40 compute-0 sudo[64710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cigsikghcdjcpclmazqyzhbvtazvczfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934199.4264748-894-222127683441817/AnsiballZ_copy.py'
Jan 20 18:36:40 compute-0 sudo[64710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:40 compute-0 python3.9[64712]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934199.4264748-894-222127683441817/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:40 compute-0 sudo[64710]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:41 compute-0 sudo[64862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thvsvkfcnsakwjwwbfnbygfywdzjpkwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934200.8338718-939-242785798288956/AnsiballZ_stat.py'
Jan 20 18:36:41 compute-0 sudo[64862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:41 compute-0 python3.9[64864]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:41 compute-0 sudo[64862]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:41 compute-0 sudo[64985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovfwlftaujnohwpiyuvipkabhtybloen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934200.8338718-939-242785798288956/AnsiballZ_copy.py'
Jan 20 18:36:41 compute-0 sudo[64985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:41 compute-0 python3.9[64987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934200.8338718-939-242785798288956/.source.yaml _original_basename=.1sxp74yx follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:41 compute-0 sudo[64985]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:42 compute-0 sudo[65137]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhfmdqodfeifowemmrfkxhovoejjasrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934202.2191956-984-273186834817675/AnsiballZ_stat.py'
Jan 20 18:36:42 compute-0 sudo[65137]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:42 compute-0 python3.9[65139]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:42 compute-0 sudo[65137]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:43 compute-0 sudo[65260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lasrllyywzrdxsdauxfhqhjpjfqrlgko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934202.2191956-984-273186834817675/AnsiballZ_copy.py'
Jan 20 18:36:43 compute-0 sudo[65260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:43 compute-0 python3.9[65262]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934202.2191956-984-273186834817675/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:43 compute-0 sudo[65260]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:44 compute-0 sudo[65412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbhgupeyyypreqzycrmcvgtwlvtxuvem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934203.7397664-1029-21236060092362/AnsiballZ_command.py'
Jan 20 18:36:44 compute-0 sudo[65412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:44 compute-0 python3.9[65414]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:44 compute-0 sudo[65412]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:44 compute-0 sudo[65565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpuwhdvkbyijinppkfdlsryaamjkxhqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934204.541288-1053-89048379791625/AnsiballZ_command.py'
Jan 20 18:36:44 compute-0 sudo[65565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:45 compute-0 python3.9[65567]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:45 compute-0 sudo[65565]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:45 compute-0 sudo[65718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtarvvvsnkqlwaxbnzfvihyltadzmcki ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768934205.3266084-1077-140112124728784/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 18:36:45 compute-0 sudo[65718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:45 compute-0 python3[65720]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 18:36:46 compute-0 sudo[65718]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:46 compute-0 sudo[65870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssnfdyfbvxsnhhcksjednrzqofyeoybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934206.2874064-1101-87628203752215/AnsiballZ_stat.py'
Jan 20 18:36:46 compute-0 sudo[65870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:46 compute-0 python3.9[65872]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:46 compute-0 sudo[65870]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:47 compute-0 sudo[65993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvxdxnlhddutqklmzdktoqeidbmqdcpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934206.2874064-1101-87628203752215/AnsiballZ_copy.py'
Jan 20 18:36:47 compute-0 sudo[65993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:47 compute-0 python3.9[65995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934206.2874064-1101-87628203752215/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:47 compute-0 sudo[65993]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:47 compute-0 sudo[66145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgqdrnrfvfueineqflpoblodzqiuaexo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934207.6557817-1146-251079589984860/AnsiballZ_stat.py'
Jan 20 18:36:47 compute-0 sudo[66145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:48 compute-0 python3.9[66147]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:48 compute-0 sudo[66145]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:48 compute-0 sudo[66268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkefanswddywuiwbnwfyvjmhqvslwqsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934207.6557817-1146-251079589984860/AnsiballZ_copy.py'
Jan 20 18:36:48 compute-0 sudo[66268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:48 compute-0 python3.9[66270]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934207.6557817-1146-251079589984860/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:48 compute-0 sudo[66268]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:49 compute-0 sudo[66420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjwnfxrckevltgttjdapvceqbjjgbtdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934209.2419944-1191-170413380968676/AnsiballZ_stat.py'
Jan 20 18:36:49 compute-0 sudo[66420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:49 compute-0 python3.9[66422]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:49 compute-0 sudo[66420]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:50 compute-0 sudo[66543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlknbovpovnzwwvzscfrkfrpwiavizid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934209.2419944-1191-170413380968676/AnsiballZ_copy.py'
Jan 20 18:36:50 compute-0 sudo[66543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:50 compute-0 python3.9[66545]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934209.2419944-1191-170413380968676/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:50 compute-0 sudo[66543]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:50 compute-0 sudo[66695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfbuibmgmbcaljijeferqiwjmrxmhcgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934210.6306334-1236-145622707892100/AnsiballZ_stat.py'
Jan 20 18:36:50 compute-0 sudo[66695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:51 compute-0 python3.9[66697]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:51 compute-0 sudo[66695]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:51 compute-0 sudo[66818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suwbgeodaypqbhrldivwfdbrrnigpthx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934210.6306334-1236-145622707892100/AnsiballZ_copy.py'
Jan 20 18:36:51 compute-0 sudo[66818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:51 compute-0 python3.9[66820]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934210.6306334-1236-145622707892100/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:51 compute-0 sudo[66818]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:52 compute-0 sudo[66970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vniizgujfflmtjnofetvwlplegyuschb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934211.9318435-1281-190534450085323/AnsiballZ_stat.py'
Jan 20 18:36:52 compute-0 sudo[66970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:52 compute-0 python3.9[66972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:36:52 compute-0 sudo[66970]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:52 compute-0 sudo[67093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjsedhutirilvopjsclekmbspcvlhpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934211.9318435-1281-190534450085323/AnsiballZ_copy.py'
Jan 20 18:36:52 compute-0 sudo[67093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:52 compute-0 python3.9[67095]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934211.9318435-1281-190534450085323/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:52 compute-0 sudo[67093]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:53 compute-0 sudo[67245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqlblgruydcpifvkyhhnschvbeqzeqha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934213.6156476-1326-110769590759041/AnsiballZ_file.py'
Jan 20 18:36:53 compute-0 sudo[67245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:54 compute-0 python3.9[67247]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:54 compute-0 sudo[67245]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:54 compute-0 sudo[67397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iistdzwmecmolnddhvvcyvttudjgrzvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934214.3542264-1350-164274887292581/AnsiballZ_command.py'
Jan 20 18:36:54 compute-0 sudo[67397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:54 compute-0 python3.9[67399]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:54 compute-0 sudo[67397]: pam_unix(sudo:session): session closed for user root
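The command above is the assemble-and-verify step: the five rendered fragments are concatenated in load order and handed to nft in check-only mode, so a syntax or ordering error fails the task before anything touches the live ruleset. A minimal standalone equivalent, using the exact paths from the logged task:

    #!/bin/sh
    # Dry-run the assembled EDPM ruleset: -c parses and validates without
    # committing, -f - reads the ruleset from stdin.
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -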
Jan 20 18:36:55 compute-0 sudo[67556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcyntbfopbcfwdadxntrhdipktctull ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934215.1402478-1374-217542463678023/AnsiballZ_blockinfile.py'
Jan 20 18:36:55 compute-0 sudo[67556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:55 compute-0 python3.9[67558]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:55 compute-0 sudo[67556]: pam_unix(sudo:session): session closed for user root
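The blockinfile task keeps an include stanza between its BEGIN/END markers in /etc/sysconfig/nftables.conf, and validate=nft -c -f %s means the candidate file must parse before it replaces the original. Given the logged block= and marker= arguments, the managed section should come out roughly as:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK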
Jan 20 18:36:56 compute-0 sudo[67709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxupeclnhpcajkzflftfrscmnaipsaki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934216.1278262-1401-254630395531644/AnsiballZ_file.py'
Jan 20 18:36:56 compute-0 sudo[67709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:56 compute-0 python3.9[67711]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:56 compute-0 sudo[67709]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:56 compute-0 sudo[67861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmhjgmodqlrvttuanbkddjrwbtyhkwsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934216.709357-1401-202483356022781/AnsiballZ_file.py'
Jan 20 18:36:56 compute-0 sudo[67861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:57 compute-0 python3.9[67863]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:57 compute-0 sudo[67861]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:57 compute-0 sudo[68013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlualcxpezgyupoyfuhaftbgsjozdfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934217.5322652-1446-23280933193746/AnsiballZ_mount.py'
Jan 20 18:36:57 compute-0 sudo[68013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:58 compute-0 python3.9[68015]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 18:36:58 compute-0 sudo[68013]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:58 compute-0 sudo[68166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npjbxvpuhrnbmkykrrarbtmcggmrgyeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934218.2448964-1446-16430550161075/AnsiballZ_mount.py'
Jan 20 18:36:58 compute-0 sudo[68166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:58 compute-0 python3.9[68168]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 18:36:58 compute-0 sudo[68166]: pam_unix(sudo:session): session closed for user root
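The two file/mount pairs above create per-page-size hugetlbfs mounts; with state=mounted and boot=True the ansible.posix.mount module both mounts immediately and persists the entry across reboots. A shell sketch of the same outcome, using the logged paths, owners, and options:

    #!/bin/sh
    # Mount points owned by zuul:hugetlbfs, mode 0775 (from the file tasks).
    install -d -o zuul -g hugetlbfs -m 0775 /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # fstab entries equivalent to boot=True:
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0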
Jan 20 18:36:59 compute-0 sshd-session[58962]: Connection closed by 192.168.122.30 port 59816
Jan 20 18:36:59 compute-0 sshd-session[58959]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:36:59 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 18:36:59 compute-0 systemd[1]: session-14.scope: Consumed 33.862s CPU time.
Jan 20 18:36:59 compute-0 systemd-logind[796]: Session 14 logged out. Waiting for processes to exit.
Jan 20 18:36:59 compute-0 systemd-logind[796]: Removed session 14.
Jan 20 18:37:04 compute-0 sshd-session[68195]: Accepted publickey for zuul from 192.168.122.30 port 58038 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:37:04 compute-0 systemd-logind[796]: New session 15 of user zuul.
Jan 20 18:37:04 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 20 18:37:04 compute-0 sshd-session[68195]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:37:05 compute-0 sudo[68348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsvqzrdvnrnvaefnytuekreyzhfzndl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934225.0322018-18-149767453498959/AnsiballZ_tempfile.py'
Jan 20 18:37:05 compute-0 sudo[68348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:05 compute-0 python3.9[68350]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 18:37:05 compute-0 sudo[68348]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:06 compute-0 sudo[68500]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galsuygatsgsdrccxmujzcquctvcrsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934225.9618404-54-110251010305787/AnsiballZ_stat.py'
Jan 20 18:37:06 compute-0 sudo[68500]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:06 compute-0 python3.9[68502]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:37:06 compute-0 sudo[68500]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:07 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 18:37:07 compute-0 sudo[68654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfehxqnddmzgjfcnsznrriqdncdvwlkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934226.9074633-84-162445352641581/AnsiballZ_setup.py'
Jan 20 18:37:07 compute-0 sudo[68654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:07 compute-0 python3.9[68656]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:37:07 compute-0 sudo[68654]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:08 compute-0 sudo[68806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugiyszvoyllhwwofdzkslugwcqftoos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934228.1767535-109-74699826577410/AnsiballZ_blockinfile.py'
Jan 20 18:37:08 compute-0 sudo[68806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:08 compute-0 python3.9[68808]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9QdI3DVbQ/8fNqEX5lzpJhhopd9VDFCOgX5Ovz96zAHRQi14Jvy8BufX9CL0nn2KOl6ezG8aIoN/hRsxCDqP65NYENNiByELn9tcS4HvcuoagePtrXopXNN2+zL3+f9mUnz3qU9ygYcjKbL+Q6PS39awjEYMDz6GSF0CWlJiQ2EVuSkpGUxLwePZI1OoVvjp7enzUJiXOT1dy1t4dsk+oAzlOCz7Twc5cYTKMsIESt6jBb3yW6gs3FUO0b6XN9xuE7bWoaTFrPzdUTXZV+kOH9/bDLe6am45px3PMBlOBK3/Dj7RrO2YLNqU7O+xjM5OKRsGZCKGjWVIB/xCRXUHaUhy9Ysa7lcTd8CvaOuVaE8WC9M5E75GQUXEsWnu7zs0+W5ZNQGQ+Y9LLfw6kNdIwLmvzVYXv3+eLyUnU9I1hOw6pgVpkfBB7NlkA/KumUL/XhjuamC0ZHRtAY7BEF6tMG/GRfy+spzvJ8gkHNtXNsF5uneM29eZfNeHasXpmras=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKZNrFAD4ziqo0rY8uHXS8b5yDewrwNrfH4oVhpwZEyi
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJHO4xAirgvBrLVTYbLwLCRykAt37Lt68eJO/YoBRvtoa8G0TJwvYUVlCWW1uOltqGDLrd0Z3J9FcQSAsez16Lw=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgECz5eSdEf6b1nBbTrZn/96903LYDK9+b7cxGGOCNIDGWsHg2JX5Kqd/u4iGx/w1fQTGAR14aejfZ2SeHn8xBZRL0RP7QYzl5W/W0lgMt+4fg8fWSgBK+lGuWeXHxLQA3EfGSOaH75DPbFbPjNhPZwK8MM3/bOS5enrM/lUVLI0VjjLnetWBuhc6gJxekhxkMC+KxBr+1a6yk+lD0cSkbmAmVqpFWaQIJNPncphxpsr3JkTklrC7sP7JtXOsYCFIiHJw/tPUTIfMpYDk5suT3f2b+uuRFUWI3DJOwpLaBMpN39KNvfSFAJCNn5V1ts3cw4gwm4TCggGyo5cCQy1wFPvrxqtNQ2SXE1N3DHUV6eF/aB7ho3f9Tfd3e04AbyJCY9eCHMOks+s0XErfE/Cn0chsJ4ZM+ET3NfOQK+Pb0/TVX82iYfJLZYF9Jp11RvI77SMs/7osnwh70VyTLREUMDpiXboJEynLArKP6ijyVsgJQlb28WyBbvYZSG1ObDC8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJTqyKhPPFjqXut+RZeKNFFnMHaz29oIDVm2c0ADBh2O
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK48ShS5Scyh986e4WPn7bEgHcWXxRaxxF6rW4jUSClnY+cE5Aoo/m90YSyz93HHWjTtRg6XJ3YwjdVSOx6pfdw=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw7cXQxn2VuIaK8uuKhbAQcJI6FiqRrHdVUvwaDkOrq8qzvByaVgk5xl8EmCSduf3k7Y8SialoKXuoU8N4uSFOSpfxwz+Hh3X3fqr6lhtpSdW+l8C1kh3dPgL3wL3CnE7vIXa+JC+4RvVawPsqUZ4Mr9cCO1BQ+K1Jl9P2NFNV2nHdMeXlm8Y5lti9nJg2TH2c+qoVr2JJ0mbQ2g6802EjO2cn2ICs7VGaGTwXoCYX4HbPgf+zq5fv6uF8vZ+fz+tpoj7+ORrrNVoMDMQPDz+OT1l9WmK4vQ0x2R+27rDgRDmcetscRnRCtJRUPUEkHy72oDBZDWQvM2R3c+hZbjeJJRpiLRcri/fCFrweLudytyA1hKV+sJodq8EbfrC8lMy0fxEGDs3/YXN0udAzS8Sg/6LiIiJRzNcbF6H70B8P4FpAnKo03BrWwGDGRVNXWh8YqOXzIqN/FQVPOJ2aZ3ZCt1xIlMKY5ncsFz1F4volxuwrKutRdeDhfJu0M+M5Go8=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMtfgxr5p3WX7/JV8ZGeyedNjypTLSFpEQC1rgg6zjYI
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFqHcFmVLlh75o0B9MfID1Caa9btI9E7S53rpl9+oGjdKlHBWb0Ut4EGvboMZg6zbxshPKgaBs01y0VgbZ/88Io=
                                             create=True mode=0644 path=/tmp/ansible.gmfb43_u state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:37:08 compute-0 sudo[68806]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:09 compute-0 sudo[68958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqkobgatykisgrsywhtecujyhkoaqpuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934229.069911-133-12315315979016/AnsiballZ_command.py'
Jan 20 18:37:09 compute-0 sudo[68958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:09 compute-0 python3.9[68960]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.gmfb43_u' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:09 compute-0 sudo[68958]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:10 compute-0 sudo[69112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvajnmleacrahbhiltanwezkvxrvrrvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934229.9923666-157-178411068138344/AnsiballZ_file.py'
Jan 20 18:37:10 compute-0 sudo[69112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:10 compute-0 python3.9[69114]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.gmfb43_u state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:37:10 compute-0 sudo[69112]: pam_unix(sudo:session): session closed for user root
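Session 15 is the known-hosts distribution pass: gather each node's public host keys as facts, stage them in a scratch file via blockinfile, copy the result over /etc/ssh/ssh_known_hosts, and delete the scratch file. Condensed to shell (key material truncated here; the full lines appear in the blockinfile entry above):

    #!/bin/sh
    # 1. Scratch file (ansible.builtin.tempfile with prefix=ansible.).
    tmp=$(mktemp /tmp/ansible.XXXXXXXX)
    # 2. One line per host key, as gathered via the ssh_host_key_*_public facts.
    printf '%s\n' \
      'compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAA...' \
      'compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAA...' \
      'compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAA...' > "$tmp"
    # 3. Install system-wide, then clean up.
    cat "$tmp" > /etc/ssh/ssh_known_hosts
    rm -f "$tmp"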
Jan 20 18:37:11 compute-0 sshd-session[68198]: Connection closed by 192.168.122.30 port 58038
Jan 20 18:37:11 compute-0 sshd-session[68195]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:37:11 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 18:37:11 compute-0 systemd[1]: session-15.scope: Consumed 3.085s CPU time.
Jan 20 18:37:11 compute-0 systemd-logind[796]: Session 15 logged out. Waiting for processes to exit.
Jan 20 18:37:11 compute-0 systemd-logind[796]: Removed session 15.
Jan 20 18:37:16 compute-0 sshd-session[69139]: Accepted publickey for zuul from 192.168.122.30 port 48204 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:37:16 compute-0 systemd-logind[796]: New session 16 of user zuul.
Jan 20 18:37:16 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 20 18:37:16 compute-0 sshd-session[69139]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:37:17 compute-0 python3.9[69292]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:37:18 compute-0 sudo[69446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqgrvrkmfhdgheoipialzpkkbelfrolq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934237.904715-51-118172504271351/AnsiballZ_systemd.py'
Jan 20 18:37:18 compute-0 sudo[69446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:18 compute-0 python3.9[69448]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 18:37:19 compute-0 sudo[69446]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:20 compute-0 sudo[69600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugesffwrhffdmvybhrgkttutymzfozuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934240.0495548-75-258665657284121/AnsiballZ_systemd.py'
Jan 20 18:37:20 compute-0 sudo[69600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:20 compute-0 python3.9[69602]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:37:20 compute-0 sudo[69600]: pam_unix(sudo:session): session closed for user root
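The two systemd module calls above reduce to the obvious systemctl pair:

    #!/bin/sh
    systemctl enable sshd   # enabled=True
    systemctl start sshd    # state=started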
Jan 20 18:37:21 compute-0 sudo[69753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ddmxgklbysafciigtkwgddngqcuhpqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934241.107974-102-235030854590483/AnsiballZ_command.py'
Jan 20 18:37:21 compute-0 sudo[69753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:21 compute-0 python3.9[69755]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:21 compute-0 sudo[69753]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:22 compute-0 sudo[69906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkiilukxzexsxojpozznfdjitfywzjxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934241.9418468-126-102577219630318/AnsiballZ_stat.py'
Jan 20 18:37:22 compute-0 sudo[69906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:22 compute-0 python3.9[69908]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:37:22 compute-0 sudo[69906]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:23 compute-0 sudo[70060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctypmyfuimvlpmqkuzentllooftwregj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934242.848802-150-30870865362786/AnsiballZ_command.py'
Jan 20 18:37:23 compute-0 sudo[70060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:23 compute-0 python3.9[70062]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:23 compute-0 sudo[70060]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:24 compute-0 sudo[70215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elxwhwcanazjwabczdwzfymkmpbjyavo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934243.6780806-174-29427532201445/AnsiballZ_file.py'
Jan 20 18:37:24 compute-0 sudo[70215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:24 compute-0 python3.9[70217]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:37:24 compute-0 sudo[70215]: pam_unix(sudo:session): session closed for user root
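This block is the apply phase matching the render phase from session 14: the chain definitions load unconditionally, while the flush/rules/update-jumps trio runs only when the edpm-rules.nft.changed marker left by the earlier copy step exists, after which the marker is removed. As shell, with paths from the logged commands:

    #!/bin/sh
    # Always (re)load chain definitions.
    nft -f /etc/nftables/edpm-chains.nft
    # Flush and reinstall rules only when the render step flagged a change.
    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f /etc/nftables/edpm-rules.nft.changed
    fi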
Jan 20 18:37:24 compute-0 sshd-session[69142]: Connection closed by 192.168.122.30 port 48204
Jan 20 18:37:24 compute-0 sshd-session[69139]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:37:24 compute-0 systemd-logind[796]: Session 16 logged out. Waiting for processes to exit.
Jan 20 18:37:24 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 18:37:24 compute-0 systemd[1]: session-16.scope: Consumed 4.149s CPU time.
Jan 20 18:37:24 compute-0 systemd-logind[796]: Removed session 16.
Jan 20 18:37:30 compute-0 sshd-session[70242]: Accepted publickey for zuul from 192.168.122.30 port 47894 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:37:30 compute-0 systemd-logind[796]: New session 17 of user zuul.
Jan 20 18:37:30 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 20 18:37:30 compute-0 sshd-session[70242]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:37:31 compute-0 python3.9[70395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:37:31 compute-0 sudo[70549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkgieadvkxfxjhjmrabntpedjaonrlgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934251.6151085-57-247413659690072/AnsiballZ_setup.py'
Jan 20 18:37:31 compute-0 sudo[70549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:32 compute-0 python3.9[70551]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:37:32 compute-0 sudo[70549]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:32 compute-0 sudo[70633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sazvkpejoosbktbmjuoqxofqphdyhljy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934251.6151085-57-247413659690072/AnsiballZ_dnf.py'
Jan 20 18:37:32 compute-0 sudo[70633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:32 compute-0 python3.9[70635]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 18:37:34 compute-0 sudo[70633]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:35 compute-0 python3.9[70786]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:36 compute-0 python3.9[70937]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
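needs-restarting -r (from yum-utils, installed two tasks earlier) signals through its exit status whether a reboot is required: 0 means no, 1 means yes. The find on /var/lib/openstack/reboot_required/ presumably treats any file there as an additional reboot flag; under that assumption the pair could be consumed as:

    #!/bin/sh
    if ! needs-restarting -r; then
        echo "reboot required: updated kernel or core libraries" >&2
    fi
    # Hypothetical flag-directory check implied by the find task above.
    if [ -n "$(ls -A /var/lib/openstack/reboot_required/ 2>/dev/null)" ]; then
        echo "reboot required: flag file present" >&2
    fi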
Jan 20 18:37:37 compute-0 python3.9[71087]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:37:37 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:37:38 compute-0 python3.9[71238]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:37:39 compute-0 sshd-session[70245]: Connection closed by 192.168.122.30 port 47894
Jan 20 18:37:39 compute-0 sshd-session[70242]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:37:39 compute-0 systemd-logind[796]: Session 17 logged out. Waiting for processes to exit.
Jan 20 18:37:39 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 18:37:39 compute-0 systemd[1]: session-17.scope: Consumed 5.977s CPU time.
Jan 20 18:37:39 compute-0 systemd-logind[796]: Removed session 17.
Jan 20 18:37:47 compute-0 sshd-session[71263]: Accepted publickey for zuul from 38.102.83.73 port 57820 ssh2: RSA SHA256:4QdNcGxIfGrd0SulXH8wKdvIjwwnijbxtrxruAjIfw8
Jan 20 18:37:47 compute-0 systemd-logind[796]: New session 18 of user zuul.
Jan 20 18:37:47 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 20 18:37:47 compute-0 sshd-session[71263]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:37:47 compute-0 sudo[71339]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnwdsfkrvpbkvnntvqudptibdhvbsnod ; /usr/bin/python3'
Jan 20 18:37:47 compute-0 sudo[71339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:47 compute-0 useradd[71343]: new group: name=ceph-admin, GID=42478
Jan 20 18:37:47 compute-0 useradd[71343]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 20 18:37:47 compute-0 sudo[71339]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:48 compute-0 sudo[71425]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsuxexynpjbmkuhfyaewuvediseiyheo ; /usr/bin/python3'
Jan 20 18:37:48 compute-0 sudo[71425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:48 compute-0 sudo[71425]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:48 compute-0 sudo[71498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snkmnxlcpdofwjnddyvsqqhdcawtmtdk ; /usr/bin/python3'
Jan 20 18:37:48 compute-0 sudo[71498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:49 compute-0 sudo[71498]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:49 compute-0 sudo[71548]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maawjxwwtxsiwlkgjpwdaivnodkzozgf ; /usr/bin/python3'
Jan 20 18:37:49 compute-0 sudo[71548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:49 compute-0 sudo[71548]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:49 compute-0 sudo[71574]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobcgzinbsewwopycczbzgybquyheswc ; /usr/bin/python3'
Jan 20 18:37:49 compute-0 sudo[71574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:49 compute-0 sudo[71574]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:50 compute-0 sudo[71600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qowqeyxpadmsboaqvoytffkzzpyjqpfu ; /usr/bin/python3'
Jan 20 18:37:50 compute-0 sudo[71600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:50 compute-0 sudo[71600]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:50 compute-0 sudo[71626]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dztrukordegodyplkqrxkmbmfkgnkogo ; /usr/bin/python3'
Jan 20 18:37:50 compute-0 sudo[71626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:50 compute-0 sudo[71626]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:51 compute-0 sudo[71704]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcheceydziviuwljtuwnlxmnhmdegpxe ; /usr/bin/python3'
Jan 20 18:37:51 compute-0 sudo[71704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:51 compute-0 sudo[71704]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:51 compute-0 sudo[71777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geurzlnsqecwhbtfybbgucyjggrvgjdo ; /usr/bin/python3'
Jan 20 18:37:51 compute-0 sudo[71777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:51 compute-0 sudo[71777]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:52 compute-0 sudo[71879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnadforfmfvuojoqdzmpnoxxcdymmxbb ; /usr/bin/python3'
Jan 20 18:37:52 compute-0 sudo[71879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:52 compute-0 sudo[71879]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:52 compute-0 sudo[71952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbvhcyfxnvryfbawpgsuroajbznlhdvn ; /usr/bin/python3'
Jan 20 18:37:52 compute-0 sudo[71952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:52 compute-0 sudo[71952]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:53 compute-0 sudo[72002]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upzqfprdolfrhywmsikwaxnlhoemfjip ; /usr/bin/python3'
Jan 20 18:37:53 compute-0 sudo[72002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:53 compute-0 python3[72004]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:37:54 compute-0 chronyd[58478]: Selected source 23.133.168.246 (pool.ntp.org)
Jan 20 18:37:54 compute-0 sudo[72002]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:55 compute-0 sudo[72097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vetszhsuslovpzkyzblbnafjtopptmiu ; /usr/bin/python3'
Jan 20 18:37:55 compute-0 sudo[72097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:55 compute-0 python3[72099]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 18:37:56 compute-0 sudo[72097]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:56 compute-0 sudo[72124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnfwlsiwtbotwbugkghzbkpdgkscfhbv ; /usr/bin/python3'
Jan 20 18:37:56 compute-0 sudo[72124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:57 compute-0 python3[72126]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:37:57 compute-0 sudo[72124]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:57 compute-0 sudo[72150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltamjnykidtgwqtukpehvcopvxwmojsj ; /usr/bin/python3'
Jan 20 18:37:57 compute-0 sudo[72150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:57 compute-0 python3[72152]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:57 compute-0 kernel: loop: module loaded
Jan 20 18:37:57 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 20 18:37:57 compute-0 sudo[72150]: pam_unix(sudo:session): session closed for user root
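The dd invocation writes zero blocks (count=0) and merely seeks to 20G, so /var/lib/ceph-osd-0.img is created as a 20 GiB sparse file; the kernel line agrees: 41943040 sectors × 512 B = 21474836480 B = 20 GiB. Standalone:

    #!/bin/sh
    # Sparse 20 GiB backing file; no data blocks are actually allocated.
    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
    # Expose it as a block device to serve as a throwaway OSD disk.
    losetup /dev/loop3 /var/lib/ceph-osd-0.img
    lsblk /dev/loop3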
Jan 20 18:37:57 compute-0 sudo[72185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdlrphgscuhskztvenjzbvzbabtnznej ; /usr/bin/python3'
Jan 20 18:37:57 compute-0 sudo[72185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:58 compute-0 python3[72187]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:58 compute-0 lvm[72190]: PV /dev/loop3 not used.
Jan 20 18:37:58 compute-0 lvm[72199]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:37:58 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 20 18:37:58 compute-0 lvm[72201]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 20 18:37:58 compute-0 sudo[72185]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:58 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
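On top of the loop device the play layers a one-PV volume group and a single logical volume spanning every free extent, which is what the autoactivation messages above refer to:

    #!/bin/sh
    pvcreate /dev/loop3                          # initialize the PV
    vgcreate ceph_vg0 /dev/loop3                 # single-PV volume group
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0   # LV over all free extents
    lvs                                          # confirm ceph_vg0/ceph_lv0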
Jan 20 18:37:58 compute-0 sudo[72277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooukiyehrthhajptofpnywcvuuhwhbou ; /usr/bin/python3'
Jan 20 18:37:58 compute-0 sudo[72277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:58 compute-0 python3[72279]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:37:58 compute-0 sudo[72277]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:59 compute-0 sudo[72350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlcrtoejyoygbfryeckaitbgunnvktz ; /usr/bin/python3'
Jan 20 18:37:59 compute-0 sudo[72350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:59 compute-0 python3[72352]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934278.5536468-36999-22227859437278/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:37:59 compute-0 sudo[72350]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:59 compute-0 sudo[72400]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkmvjpcqxdsyjhyvrgnmmjejcctsbuop ; /usr/bin/python3'
Jan 20 18:37:59 compute-0 sudo[72400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:00 compute-0 python3[72402]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:38:00 compute-0 systemd[1]: Reloading.
Jan 20 18:38:00 compute-0 systemd-rc-local-generator[72427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:00 compute-0 systemd-sysv-generator[72432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:38:00 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 20 18:38:00 compute-0 bash[72441]: /dev/loop3: [64513]:4328449 (/var/lib/ceph-osd-0.img)
Jan 20 18:38:00 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 20 18:38:00 compute-0 sudo[72400]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:00 compute-0 lvm[72442]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:38:00 compute-0 lvm[72442]: VG ceph_vg0 finished
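Only the unit's path and runtime behaviour are visible in the journal (a oneshot named "Ceph OSD losetup" whose output is the losetup status line for /dev/loop3), so the rendered template can only be guessed at. A plausible reconstruction, every line of which is an assumption rather than the logged file content:

    #!/bin/sh
    # HYPOTHETICAL unit body; the real ceph-osd-losetup.service.j2 render
    # is not captured in this log.
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup
    After=local-fs.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # Print the attachment status (matching the logged
    # "/dev/loop3: [...] (/var/lib/ceph-osd-0.img)" line), or re-attach
    # the backing file after a reboot.
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF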
Jan 20 18:38:02 compute-0 python3[72466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:38:06 compute-0 sudo[72557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivbnrjvvfzurjfvvuievdtedxwdkaty ; /usr/bin/python3'
Jan 20 18:38:06 compute-0 sudo[72557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:06 compute-0 python3[72559]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 18:38:08 compute-0 sudo[72557]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:08 compute-0 sudo[72614]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiztekldlmrjjqdypgymwbkxgakuklry ; /usr/bin/python3'
Jan 20 18:38:08 compute-0 sudo[72614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:08 compute-0 python3[72616]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 18:38:12 compute-0 groupadd[72626]: group added to /etc/group: name=cephadm, GID=993
Jan 20 18:38:12 compute-0 groupadd[72626]: group added to /etc/gshadow: name=cephadm
Jan 20 18:38:12 compute-0 groupadd[72626]: new group: name=cephadm, GID=993
Jan 20 18:38:12 compute-0 useradd[72633]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 20 18:38:12 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:38:12 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:38:13 compute-0 sudo[72614]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:13 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:38:13 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:38:13 compute-0 systemd[1]: run-r0ecf2f7c0c8d42a5afa354a56baaa81c.service: Deactivated successfully.
Jan 20 18:38:13 compute-0 sudo[72729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plzjkhapcohiqtlkbupfqrbywjsvrwfb ; /usr/bin/python3'
Jan 20 18:38:13 compute-0 sudo[72729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:13 compute-0 python3[72731]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:38:13 compute-0 sudo[72729]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:13 compute-0 sudo[72757]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uijlixxbpgblhvbljbkndzzofkerplkm ; /usr/bin/python3'
Jan 20 18:38:13 compute-0 sudo[72757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:14 compute-0 python3[72759]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:14 compute-0 sudo[72757]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:14 compute-0 sudo[72822]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yskqmjihsagzyukcugqckqjrpfiitovp ; /usr/bin/python3'
Jan 20 18:38:14 compute-0 sudo[72822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:15 compute-0 python3[72824]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:38:15 compute-0 sudo[72822]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:15 compute-0 sudo[72848]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhingngxbeecklomxvompufsuhrolymn ; /usr/bin/python3'
Jan 20 18:38:15 compute-0 sudo[72848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:15 compute-0 python3[72850]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:38:15 compute-0 sudo[72848]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:15 compute-0 sudo[72926]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbnrfhwtbrwhnlxrdulhhjeubqijvgyv ; /usr/bin/python3'
Jan 20 18:38:15 compute-0 sudo[72926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:16 compute-0 python3[72928]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:38:16 compute-0 sudo[72926]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:16 compute-0 sudo[72999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhqhkborityanvdfaiuanauwjmyuunei ; /usr/bin/python3'
Jan 20 18:38:16 compute-0 sudo[72999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:16 compute-0 python3[73001]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934295.8168378-37191-211619312729834/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:38:16 compute-0 sudo[72999]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:17 compute-0 sudo[73101]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btthkxfsetreuixbskgaqkyjinmkxhou ; /usr/bin/python3'
Jan 20 18:38:17 compute-0 sudo[73101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:17 compute-0 python3[73103]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:38:17 compute-0 sudo[73101]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:17 compute-0 sudo[73174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptmmsrfkfysqyoebkvyqargaopbqbbwv ; /usr/bin/python3'
Jan 20 18:38:17 compute-0 sudo[73174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:17 compute-0 python3[73176]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934296.8937147-37209-29239549709296/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:38:17 compute-0 sudo[73174]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:17 compute-0 sudo[73224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prpvendtmzclqgtzswdstdippmpwskuk ; /usr/bin/python3'
Jan 20 18:38:17 compute-0 sudo[73224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:17 compute-0 python3[73226]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:38:17 compute-0 sudo[73224]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:18 compute-0 sudo[73252]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfjkdlrotcezvnnratxutwnmsdsjipvz ; /usr/bin/python3'
Jan 20 18:38:18 compute-0 sudo[73252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:18 compute-0 python3[73254]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:38:18 compute-0 sudo[73252]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:18 compute-0 sudo[73280]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deofdobvoupytxttcxgvqfyeowwrennt ; /usr/bin/python3'
Jan 20 18:38:18 compute-0 sudo[73280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:18 compute-0 python3[73282]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:38:18 compute-0 sudo[73280]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:18 compute-0 python3[73308]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:38:19 compute-0 sudo[73332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahlvjlhxxvbxpobdheuzwvrdadviozuw ; /usr/bin/python3'
Jan 20 18:38:19 compute-0 sudo[73332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:19 compute-0 python3[73334]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config /home/ceph-admin/assimilate_ceph.conf --skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
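The bootstrap command, reflowed with the role of the key logged flags noted (all values verbatim from the entry above):

    #!/bin/sh
    # --skip-firewalld: firewalling is already handled by the EDPM nftables
    #   ruleset installed earlier in this log.
    # --fsid: fixed cluster id, making the CI run reproducible.
    # --skip-monitoring-stack / --skip-dashboard: omit services not needed here.
    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100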
Jan 20 18:38:19 compute-0 sshd-session[73338]: Accepted publickey for ceph-admin from 192.168.122.100 port 42264 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:38:19 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 18:38:19 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 18:38:19 compute-0 systemd-logind[796]: New session 19 of user ceph-admin.
Jan 20 18:38:19 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 18:38:19 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 18:38:19 compute-0 systemd[73342]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:38:19 compute-0 systemd[73342]: Queued start job for default target Main User Target.
Jan 20 18:38:19 compute-0 systemd[73342]: Created slice User Application Slice.
Jan 20 18:38:19 compute-0 systemd[73342]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:38:19 compute-0 systemd[73342]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:38:19 compute-0 systemd[73342]: Reached target Paths.
Jan 20 18:38:19 compute-0 systemd[73342]: Reached target Timers.
Jan 20 18:38:19 compute-0 systemd[73342]: Starting D-Bus User Message Bus Socket...
Jan 20 18:38:19 compute-0 systemd[73342]: Starting Create User's Volatile Files and Directories...
Jan 20 18:38:19 compute-0 systemd[73342]: Finished Create User's Volatile Files and Directories.
Jan 20 18:38:19 compute-0 systemd[73342]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:38:19 compute-0 systemd[73342]: Reached target Sockets.
Jan 20 18:38:19 compute-0 systemd[73342]: Reached target Basic System.
Jan 20 18:38:19 compute-0 systemd[73342]: Reached target Main User Target.
Jan 20 18:38:19 compute-0 systemd[73342]: Startup finished in 123ms.
Jan 20 18:38:19 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 18:38:19 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 20 18:38:19 compute-0 sshd-session[73338]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:38:19 compute-0 sudo[73358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 20 18:38:19 compute-0 sudo[73358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:38:19 compute-0 sudo[73358]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:19 compute-0 sshd-session[73357]: Received disconnect from 192.168.122.100 port 42264:11: disconnected by user
Jan 20 18:38:19 compute-0 sshd-session[73357]: Disconnected from user ceph-admin 192.168.122.100 port 42264
Jan 20 18:38:19 compute-0 sshd-session[73338]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:38:19 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 18:38:19 compute-0 systemd-logind[796]: Session 19 logged out. Waiting for processes to exit.
Jan 20 18:38:19 compute-0 systemd-logind[796]: Removed session 19.
Jan 20 18:38:19 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:20 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat876418563-lower\x2dmapped.mount: Deactivated successfully.
Jan 20 18:38:30 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 20 18:38:30 compute-0 systemd[73342]: Activating special unit Exit the Session...
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped target Main User Target.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped target Basic System.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped target Paths.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped target Sockets.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped target Timers.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 18:38:30 compute-0 systemd[73342]: Closed D-Bus User Message Bus Socket.
Jan 20 18:38:30 compute-0 systemd[73342]: Stopped Create User's Volatile Files and Directories.
Jan 20 18:38:30 compute-0 systemd[73342]: Removed slice User Application Slice.
Jan 20 18:38:30 compute-0 systemd[73342]: Reached target Shutdown.
Jan 20 18:38:30 compute-0 systemd[73342]: Finished Exit the Session.
Jan 20 18:38:30 compute-0 systemd[73342]: Reached target Exit the Session.
Jan 20 18:38:30 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 20 18:38:30 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 20 18:38:30 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 20 18:38:30 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 20 18:38:30 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 20 18:38:30 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 20 18:38:30 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 20 18:38:39 compute-0 podman[73436]: 2026-01-20 18:38:39.808671277 +0000 UTC m=+19.635267024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:39 compute-0 podman[73498]: 2026-01-20 18:38:39.886238738 +0000 UTC m=+0.048667526 container create 70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419 (image=quay.io/ceph/ceph:v19, name=youthful_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:39 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 20 18:38:39 compute-0 systemd[1]: Started libpod-conmon-70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419.scope.
Jan 20 18:38:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:39 compute-0 podman[73498]: 2026-01-20 18:38:39.860240561 +0000 UTC m=+0.022669399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:39 compute-0 podman[73498]: 2026-01-20 18:38:39.966656197 +0000 UTC m=+0.129085055 container init 70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419 (image=quay.io/ceph/ceph:v19, name=youthful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:38:39 compute-0 podman[73498]: 2026-01-20 18:38:39.972678041 +0000 UTC m=+0.135106829 container start 70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419 (image=quay.io/ceph/ceph:v19, name=youthful_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:38:39 compute-0 podman[73498]: 2026-01-20 18:38:39.975873077 +0000 UTC m=+0.138301895 container attach 70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419 (image=quay.io/ceph/ceph:v19, name=youthful_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:38:40 compute-0 youthful_ardinghelli[73514]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 20 18:38:40 compute-0 systemd[1]: libpod-70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73498]: 2026-01-20 18:38:40.076726292 +0000 UTC m=+0.239155080 container died 70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419 (image=quay.io/ceph/ceph:v19, name=youthful_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ef488232fdd3a759630e96373d7846d26a0da3fab3fbb68e43c6d7f3e40da01-merged.mount: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73498]: 2026-01-20 18:38:40.111758285 +0000 UTC m=+0.274187063 container remove 70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419 (image=quay.io/ceph/ceph:v19, name=youthful_ardinghelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:38:40 compute-0 systemd[1]: libpod-conmon-70826d534e9e09f32751bb0107eafae3b14f93a2976784da7ba98b45bf9d0419.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.167989936 +0000 UTC m=+0.038606591 container create dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b (image=quay.io/ceph/ceph:v19, name=youthful_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 18:38:40 compute-0 systemd[1]: Started libpod-conmon-dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b.scope.
Jan 20 18:38:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.219668712 +0000 UTC m=+0.090285387 container init dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b (image=quay.io/ceph/ceph:v19, name=youthful_tesla, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.225235724 +0000 UTC m=+0.095852389 container start dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b (image=quay.io/ceph/ceph:v19, name=youthful_tesla, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 18:38:40 compute-0 youthful_tesla[73546]: 167 167
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.229740117 +0000 UTC m=+0.100356822 container attach dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b (image=quay.io/ceph/ceph:v19, name=youthful_tesla, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:40 compute-0 systemd[1]: libpod-dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.23097767 +0000 UTC m=+0.101594365 container died dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b (image=quay.io/ceph/ceph:v19, name=youthful_tesla, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.152530185 +0000 UTC m=+0.023146850 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:40 compute-0 podman[73530]: 2026-01-20 18:38:40.273257131 +0000 UTC m=+0.143873796 container remove dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b (image=quay.io/ceph/ceph:v19, name=youthful_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:40 compute-0 systemd[1]: libpod-conmon-dd195359e67d81158ea6b83810b5c4bbd4c8ca31e41ae494822fb78504e2be4b.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.340138591 +0000 UTC m=+0.047047851 container create bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba (image=quay.io/ceph/ceph:v19, name=hardcore_aryabhata, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:40 compute-0 systemd[1]: Started libpod-conmon-bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba.scope.
Jan 20 18:38:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.390237005 +0000 UTC m=+0.097146265 container init bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba (image=quay.io/ceph/ceph:v19, name=hardcore_aryabhata, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.395348934 +0000 UTC m=+0.102258184 container start bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba (image=quay.io/ceph/ceph:v19, name=hardcore_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.398258343 +0000 UTC m=+0.105167593 container attach bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba (image=quay.io/ceph/ceph:v19, name=hardcore_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.317406352 +0000 UTC m=+0.024315662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:40 compute-0 hardcore_aryabhata[73580]: AQCwy29pVGKXGBAAN/EnfY+/8zkqmUIMgRAfRA==
Jan 20 18:38:40 compute-0 systemd[1]: libpod-bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.415831851 +0000 UTC m=+0.122741101 container died bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba (image=quay.io/ceph/ceph:v19, name=hardcore_aryabhata, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:40 compute-0 podman[73563]: 2026-01-20 18:38:40.453396723 +0000 UTC m=+0.160305983 container remove bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba (image=quay.io/ceph/ceph:v19, name=hardcore_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:38:40 compute-0 systemd[1]: libpod-conmon-bde0633433f3457b6c905b159f84647fc4b10fb130dce93b1cc65774f6fc9fba.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.514400134 +0000 UTC m=+0.039632539 container create 8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8 (image=quay.io/ceph/ceph:v19, name=distracted_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:38:40 compute-0 systemd[1]: Started libpod-conmon-8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8.scope.
Jan 20 18:38:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.568248339 +0000 UTC m=+0.093480754 container init 8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8 (image=quay.io/ceph/ceph:v19, name=distracted_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.574319715 +0000 UTC m=+0.099552130 container start 8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8 (image=quay.io/ceph/ceph:v19, name=distracted_mcnulty, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.577780209 +0000 UTC m=+0.103012634 container attach 8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8 (image=quay.io/ceph/ceph:v19, name=distracted_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:40 compute-0 distracted_mcnulty[73615]: AQCwy29pjlx3IxAADdVA6D7fwkuoLKhCUI8+qA==
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.498915323 +0000 UTC m=+0.024147748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:40 compute-0 systemd[1]: libpod-8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.598950405 +0000 UTC m=+0.124182850 container died 8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8 (image=quay.io/ceph/ceph:v19, name=distracted_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:40 compute-0 podman[73600]: 2026-01-20 18:38:40.63622931 +0000 UTC m=+0.161461725 container remove 8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8 (image=quay.io/ceph/ceph:v19, name=distracted_mcnulty, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:38:40 compute-0 systemd[1]: libpod-conmon-8aebbd38d111ccdbc0322b8a7c30109e9f5cdc96e11e47b99d81d6af1837b1e8.scope: Deactivated successfully.
Jan 20 18:38:40 compute-0 podman[73636]: 2026-01-20 18:38:40.694475044 +0000 UTC m=+0.039843515 container create 7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836 (image=quay.io/ceph/ceph:v19, name=funny_margulis, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:38:40 compute-0 systemd[1]: Started libpod-conmon-7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836.scope.
Jan 20 18:38:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:40 compute-0 podman[73636]: 2026-01-20 18:38:40.677943625 +0000 UTC m=+0.023312106 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:43 compute-0 podman[73636]: 2026-01-20 18:38:43.44307682 +0000 UTC m=+2.788445291 container init 7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836 (image=quay.io/ceph/ceph:v19, name=funny_margulis, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:38:43 compute-0 podman[73636]: 2026-01-20 18:38:43.45083143 +0000 UTC m=+2.796199961 container start 7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836 (image=quay.io/ceph/ceph:v19, name=funny_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:38:43 compute-0 podman[73636]: 2026-01-20 18:38:43.472935582 +0000 UTC m=+2.818304073 container attach 7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836 (image=quay.io/ceph/ceph:v19, name=funny_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:38:43 compute-0 funny_margulis[73652]: AQCzy29pgaleHRAAky7SrrvprHPxsKygmLeguA==
Jan 20 18:38:43 compute-0 systemd[1]: libpod-7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836.scope: Deactivated successfully.
Jan 20 18:38:43 compute-0 podman[73636]: 2026-01-20 18:38:43.498766585 +0000 UTC m=+2.844135116 container died 7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836 (image=quay.io/ceph/ceph:v19, name=funny_margulis, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d891805ef0f9d8734cf3e178007f20ce128e35afbf75f38ac609d76f09fb9155-merged.mount: Deactivated successfully.
Jan 20 18:38:43 compute-0 podman[73636]: 2026-01-20 18:38:43.545881607 +0000 UTC m=+2.891250068 container remove 7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836 (image=quay.io/ceph/ceph:v19, name=funny_margulis, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:43 compute-0 systemd[1]: libpod-conmon-7329796ea945bb915cba2eb1ebe91159ed0cd3ef2e67a3e4b5bc2194e5cdf836.scope: Deactivated successfully.
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.621848474 +0000 UTC m=+0.049991520 container create 24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd (image=quay.io/ceph/ceph:v19, name=gracious_pare, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:43 compute-0 systemd[1]: Started libpod-conmon-24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd.scope.
Jan 20 18:38:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4985d3ddc89811b43f8cc0f54f87c6f3be86120230c0b9e4b422f6e23f0bbbc6/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.684400067 +0000 UTC m=+0.112543123 container init 24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd (image=quay.io/ceph/ceph:v19, name=gracious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True)
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.689535447 +0000 UTC m=+0.117678503 container start 24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd (image=quay.io/ceph/ceph:v19, name=gracious_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.693710351 +0000 UTC m=+0.121853397 container attach 24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd (image=quay.io/ceph/ceph:v19, name=gracious_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.60219691 +0000 UTC m=+0.030340006 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:43 compute-0 gracious_pare[73688]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 20 18:38:43 compute-0 gracious_pare[73688]: setting min_mon_release = quincy
Jan 20 18:38:43 compute-0 gracious_pare[73688]: /usr/bin/monmaptool: set fsid to aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:43 compute-0 gracious_pare[73688]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 20 18:38:43 compute-0 systemd[1]: libpod-24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd.scope: Deactivated successfully.
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.730029458 +0000 UTC m=+0.158172504 container died 24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd (image=quay.io/ceph/ceph:v19, name=gracious_pare, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:43 compute-0 podman[73671]: 2026-01-20 18:38:43.767333424 +0000 UTC m=+0.195476480 container remove 24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd (image=quay.io/ceph/ceph:v19, name=gracious_pare, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:43 compute-0 systemd[1]: libpod-conmon-24dd2b089ed9a273d1108d97af9e0f17f225f7a70db04dffabdff2873eccc2dd.scope: Deactivated successfully.
Jan 20 18:38:43 compute-0 podman[73707]: 2026-01-20 18:38:43.84396552 +0000 UTC m=+0.045597063 container create 41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00 (image=quay.io/ceph/ceph:v19, name=peaceful_turing, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:43 compute-0 systemd[1]: Started libpod-conmon-41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00.scope.
Jan 20 18:38:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f15d54b0157a04f09411a1e75c9e35106b38be5392ee44b7a06d8839dab32e7/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f15d54b0157a04f09411a1e75c9e35106b38be5392ee44b7a06d8839dab32e7/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f15d54b0157a04f09411a1e75c9e35106b38be5392ee44b7a06d8839dab32e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f15d54b0157a04f09411a1e75c9e35106b38be5392ee44b7a06d8839dab32e7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:43 compute-0 podman[73707]: 2026-01-20 18:38:43.82745245 +0000 UTC m=+0.029083983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:43 compute-0 podman[73707]: 2026-01-20 18:38:43.926183498 +0000 UTC m=+0.127815051 container init 41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00 (image=quay.io/ceph/ceph:v19, name=peaceful_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 18:38:43 compute-0 podman[73707]: 2026-01-20 18:38:43.933232369 +0000 UTC m=+0.134863892 container start 41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00 (image=quay.io/ceph/ceph:v19, name=peaceful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:38:43 compute-0 podman[73707]: 2026-01-20 18:38:43.937208377 +0000 UTC m=+0.138839920 container attach 41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00 (image=quay.io/ceph/ceph:v19, name=peaceful_turing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:44 compute-0 systemd[1]: libpod-41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00.scope: Deactivated successfully.
Jan 20 18:38:44 compute-0 podman[73707]: 2026-01-20 18:38:44.029097628 +0000 UTC m=+0.230729191 container died 41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00 (image=quay.io/ceph/ceph:v19, name=peaceful_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:38:44 compute-0 podman[73707]: 2026-01-20 18:38:44.071917604 +0000 UTC m=+0.273549137 container remove 41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00 (image=quay.io/ceph/ceph:v19, name=peaceful_turing, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:38:44 compute-0 systemd[1]: libpod-conmon-41eb3be785c9a6cabc83e13cb17da388034c655769aad26903036f4dbfb55e00.scope: Deactivated successfully.
Jan 20 18:38:44 compute-0 systemd[1]: Reloading.
Jan 20 18:38:44 compute-0 systemd-rc-local-generator[73796]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:44 compute-0 systemd-sysv-generator[73800]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:38:44 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:44 compute-0 systemd[1]: Reloading.
Jan 20 18:38:44 compute-0 systemd-rc-local-generator[73829]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:44 compute-0 systemd-sysv-generator[73833]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:38:44 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 20 18:38:44 compute-0 systemd[1]: Reloading.
Jan 20 18:38:44 compute-0 systemd-rc-local-generator[73868]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:44 compute-0 systemd-sysv-generator[73872]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:38:44 compute-0 systemd[1]: Reached target Ceph cluster aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:38:44 compute-0 systemd[1]: Reloading.
Jan 20 18:38:45 compute-0 systemd-rc-local-generator[73907]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:45 compute-0 systemd-sysv-generator[73911]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:38:45 compute-0 systemd[1]: Reloading.
Jan 20 18:38:45 compute-0 systemd-rc-local-generator[73947]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:45 compute-0 systemd-sysv-generator[73952]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:38:45 compute-0 systemd[1]: Created slice Slice /system/ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:38:45 compute-0 systemd[1]: Reached target System Time Set.
Jan 20 18:38:45 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 20 18:38:45 compute-0 systemd[1]: Starting Ceph mon.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:45 compute-0 podman[74004]: 2026-01-20 18:38:45.887176936 +0000 UTC m=+0.050934076 container create aaaa94b864b9ba49a0ca1ddc8ad681b9aa319bf83930177af0c44d3b2ed91495 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 18:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a55f6f47b4a337d4ee3d6fe7c3b35b6b6e498e069094b7b1aafa9823d1133e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a55f6f47b4a337d4ee3d6fe7c3b35b6b6e498e069094b7b1aafa9823d1133e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a55f6f47b4a337d4ee3d6fe7c3b35b6b6e498e069094b7b1aafa9823d1133e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a55f6f47b4a337d4ee3d6fe7c3b35b6b6e498e069094b7b1aafa9823d1133e2/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:45 compute-0 podman[74004]: 2026-01-20 18:38:45.86562842 +0000 UTC m=+0.029385610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:45 compute-0 podman[74004]: 2026-01-20 18:38:45.967463331 +0000 UTC m=+0.131220501 container init aaaa94b864b9ba49a0ca1ddc8ad681b9aa319bf83930177af0c44d3b2ed91495 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:45 compute-0 podman[74004]: 2026-01-20 18:38:45.97473452 +0000 UTC m=+0.138491660 container start aaaa94b864b9ba49a0ca1ddc8ad681b9aa319bf83930177af0c44d3b2ed91495 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:38:45 compute-0 bash[74004]: aaaa94b864b9ba49a0ca1ddc8ad681b9aa319bf83930177af0c44d3b2ed91495
Jan 20 18:38:45 compute-0 systemd[1]: Started Ceph mon.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:38:46 compute-0 ceph-mon[74024]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: pidfile_write: ignore empty --pid-file
Jan 20 18:38:46 compute-0 ceph-mon[74024]: load: jerasure load: lrc 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: RocksDB version: 7.9.2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Git sha 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: DB SUMMARY
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: DB Session ID:  AUZH9KVNQWPXJE0R55OU
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: CURRENT file:  CURRENT
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                         Options.error_if_exists: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.create_if_missing: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                                     Options.env: 0x55a17a4e5c20
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                                Options.info_log: 0x55a17cb0b940
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                              Options.statistics: (nil)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                               Options.use_fsync: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                              Options.db_log_dir: 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                                 Options.wal_dir: 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                    Options.write_buffer_manager: 0x55a17cb0f900
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.unordered_write: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                               Options.row_cache: None
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                              Options.wal_filter: None
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.two_write_queues: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.wal_compression: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.atomic_flush: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.max_background_jobs: 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.max_background_compactions: -1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.max_subcompactions: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                          Options.max_open_files: -1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Compression algorithms supported:
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kZSTD supported: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kXpressCompression supported: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kBZip2Compression supported: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kLZ4Compression supported: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kZlibCompression supported: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         kSnappyCompression supported: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:           Options.merge_operator: 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:        Options.compaction_filter: None
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a17cb0b5e0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a17cb2e9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.compression: NoCompression
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.num_levels: 7
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cbf3ab03-d51c-4622-b6c7-e997cd5246eb
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934326028071, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934326030092, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "AUZH9KVNQWPXJE0R55OU", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934326030226, "job": 1, "event": "recovery_finished"}
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a17cb30e00
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: DB pointer 0x55a17cb40000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 18:38:46 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.13 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a17cb2e9b0#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 0.00022 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.95 KB,0.000181794%) FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 18:38:46 compute-0 ceph-mon[74024]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@-1(???) e0 preinit fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 20 18:38:46 compute-0 ceph-mon[74024]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 18:38:46 compute-0 ceph-mon[74024]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : created 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,os=Linux}
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).mds e1 new map
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-20T18:38:46.076055+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mkfs aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.100165403 +0000 UTC m=+0.073604964 container create adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8 (image=quay.io/ceph/ceph:v19, name=practical_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 18:38:46 compute-0 systemd[1]: Started libpod-conmon-adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8.scope.
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.073286462 +0000 UTC m=+0.046726053 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e87306222519ccb4a7232b1bec74eb198b99cf8caca9a1e9a0644a7ef6fdd9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e87306222519ccb4a7232b1bec74eb198b99cf8caca9a1e9a0644a7ef6fdd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6e87306222519ccb4a7232b1bec74eb198b99cf8caca9a1e9a0644a7ef6fdd9/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.208023828 +0000 UTC m=+0.181463429 container init adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8 (image=quay.io/ceph/ceph:v19, name=practical_haslett, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.218207856 +0000 UTC m=+0.191647407 container start adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8 (image=quay.io/ceph/ceph:v19, name=practical_haslett, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.22204158 +0000 UTC m=+0.195481131 container attach adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8 (image=quay.io/ceph/ceph:v19, name=practical_haslett, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4260064950' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 18:38:46 compute-0 practical_haslett[74079]:   cluster:
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     id:     aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     health: HEALTH_OK
Jan 20 18:38:46 compute-0 practical_haslett[74079]:  
Jan 20 18:38:46 compute-0 practical_haslett[74079]:   services:
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     mon: 1 daemons, quorum compute-0 (age 0.357071s)
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     mgr: no daemons active
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     osd: 0 osds: 0 up, 0 in
Jan 20 18:38:46 compute-0 practical_haslett[74079]:  
Jan 20 18:38:46 compute-0 practical_haslett[74079]:   data:
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     pools:   0 pools, 0 pgs
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     objects: 0 objects, 0 B
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     usage:   0 B used, 0 B / 0 B avail
Jan 20 18:38:46 compute-0 practical_haslett[74079]:     pgs:     
Jan 20 18:38:46 compute-0 practical_haslett[74079]:  
Jan 20 18:38:46 compute-0 systemd[1]: libpod-adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8.scope: Deactivated successfully.
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.448605306 +0000 UTC m=+0.422044887 container died adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8 (image=quay.io/ceph/ceph:v19, name=practical_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:46 compute-0 podman[74025]: 2026-01-20 18:38:46.504756674 +0000 UTC m=+0.478196225 container remove adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8 (image=quay.io/ceph/ceph:v19, name=practical_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:46 compute-0 systemd[1]: libpod-conmon-adc6c0f3f5c3b70bdbc905497759d32de451e7bb72694881666e3085db8d91d8.scope: Deactivated successfully.
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.570121553 +0000 UTC m=+0.043893425 container create ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c (image=quay.io/ceph/ceph:v19, name=great_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Jan 20 18:38:46 compute-0 systemd[1]: Started libpod-conmon-ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c.scope.
Jan 20 18:38:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.548175676 +0000 UTC m=+0.021947598 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01592f6a859464066e49833be41c045c554745f2b7e85fd6e741f2bb4e54f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01592f6a859464066e49833be41c045c554745f2b7e85fd6e741f2bb4e54f3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01592f6a859464066e49833be41c045c554745f2b7e85fd6e741f2bb4e54f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d01592f6a859464066e49833be41c045c554745f2b7e85fd6e741f2bb4e54f3/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.65490197 +0000 UTC m=+0.128673862 container init ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c (image=quay.io/ceph/ceph:v19, name=great_gauss, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.662351384 +0000 UTC m=+0.136123266 container start ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c (image=quay.io/ceph/ceph:v19, name=great_gauss, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.666195839 +0000 UTC m=+0.139967711 container attach ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c (image=quay.io/ceph/ceph:v19, name=great_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4147141233' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 18:38:46 compute-0 ceph-mon[74024]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4147141233' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 18:38:46 compute-0 great_gauss[74135]: 
Jan 20 18:38:46 compute-0 great_gauss[74135]: [global]
Jan 20 18:38:46 compute-0 great_gauss[74135]:         fsid = aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:46 compute-0 great_gauss[74135]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 18:38:46 compute-0 systemd[1]: libpod-ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c.scope: Deactivated successfully.
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.904563636 +0000 UTC m=+0.378335538 container died ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c (image=quay.io/ceph/ceph:v19, name=great_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d01592f6a859464066e49833be41c045c554745f2b7e85fd6e741f2bb4e54f3-merged.mount: Deactivated successfully.
Jan 20 18:38:46 compute-0 podman[74118]: 2026-01-20 18:38:46.953674032 +0000 UTC m=+0.427445944 container remove ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c (image=quay.io/ceph/ceph:v19, name=great_gauss, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:38:46 compute-0 systemd[1]: libpod-conmon-ad32b5cfb7e7ca4772bfd7724f077c5ebb256ebf18dc8fde15c26d4a7ce3c30c.scope: Deactivated successfully.
Jan 20 18:38:47 compute-0 podman[74173]: 2026-01-20 18:38:47.030698498 +0000 UTC m=+0.050463934 container create 771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3 (image=quay.io/ceph/ceph:v19, name=inspiring_yalow, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:47 compute-0 systemd[1]: Started libpod-conmon-771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3.scope.
Jan 20 18:38:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55151fe537067f9ee9ec72973a154c497ed41dad8839c4dc371e772f80d2fba4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55151fe537067f9ee9ec72973a154c497ed41dad8839c4dc371e772f80d2fba4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55151fe537067f9ee9ec72973a154c497ed41dad8839c4dc371e772f80d2fba4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55151fe537067f9ee9ec72973a154c497ed41dad8839c4dc371e772f80d2fba4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:47 compute-0 podman[74173]: 2026-01-20 18:38:47.011466305 +0000 UTC m=+0.031231721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:47 compute-0 ceph-mon[74024]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 18:38:47 compute-0 ceph-mon[74024]: monmap epoch 1
Jan 20 18:38:47 compute-0 ceph-mon[74024]: fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:47 compute-0 ceph-mon[74024]: last_changed 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:47 compute-0 ceph-mon[74024]: created 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:47 compute-0 ceph-mon[74024]: min_mon_release 19 (squid)
Jan 20 18:38:47 compute-0 ceph-mon[74024]: election_strategy: 1
Jan 20 18:38:47 compute-0 ceph-mon[74024]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:38:47 compute-0 ceph-mon[74024]: fsmap 
Jan 20 18:38:47 compute-0 ceph-mon[74024]: osdmap e1: 0 total, 0 up, 0 in
Jan 20 18:38:47 compute-0 ceph-mon[74024]: mgrmap e1: no daemons active
Jan 20 18:38:47 compute-0 ceph-mon[74024]: from='client.? 192.168.122.100:0/4260064950' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 18:38:47 compute-0 ceph-mon[74024]: from='client.? 192.168.122.100:0/4147141233' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 18:38:47 compute-0 ceph-mon[74024]: from='client.? 192.168.122.100:0/4147141233' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 18:38:47 compute-0 podman[74173]: 2026-01-20 18:38:47.118922559 +0000 UTC m=+0.138687985 container init 771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3 (image=quay.io/ceph/ceph:v19, name=inspiring_yalow, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 18:38:47 compute-0 podman[74173]: 2026-01-20 18:38:47.125076116 +0000 UTC m=+0.144841522 container start 771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3 (image=quay.io/ceph/ceph:v19, name=inspiring_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:47 compute-0 podman[74173]: 2026-01-20 18:38:47.12815939 +0000 UTC m=+0.147924816 container attach 771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3 (image=quay.io/ceph/ceph:v19, name=inspiring_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:47 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:38:47 compute-0 ceph-mon[74024]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254476854' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:38:47 compute-0 systemd[1]: libpod-771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3.scope: Deactivated successfully.
Jan 20 18:38:47 compute-0 podman[74215]: 2026-01-20 18:38:47.380736005 +0000 UTC m=+0.030467041 container died 771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3 (image=quay.io/ceph/ceph:v19, name=inspiring_yalow, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-55151fe537067f9ee9ec72973a154c497ed41dad8839c4dc371e772f80d2fba4-merged.mount: Deactivated successfully.
Jan 20 18:38:47 compute-0 podman[74215]: 2026-01-20 18:38:47.432722739 +0000 UTC m=+0.082453695 container remove 771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3 (image=quay.io/ceph/ceph:v19, name=inspiring_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:38:47 compute-0 systemd[1]: libpod-conmon-771d1d8f68c3697a3f40c7ce5118b8879dad7d0ea07da89f57c0fc81bea7dad3.scope: Deactivated successfully.
Jan 20 18:38:47 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:38:47 compute-0 ceph-mon[74024]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 18:38:47 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 18:38:47 compute-0 ceph-mon[74024]: mon.compute-0@0(leader) e1 shutdown
Jan 20 18:38:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0[74020]: 2026-01-20T18:38:47.679+0000 7f8d6d44b640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 18:38:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0[74020]: 2026-01-20T18:38:47.679+0000 7f8d6d44b640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 18:38:47 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 18:38:47 compute-0 ceph-mon[74024]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 18:38:47 compute-0 podman[74259]: 2026-01-20 18:38:47.700435225 +0000 UTC m=+0.056224431 container died aaaa94b864b9ba49a0ca1ddc8ad681b9aa319bf83930177af0c44d3b2ed91495 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a55f6f47b4a337d4ee3d6fe7c3b35b6b6e498e069094b7b1aafa9823d1133e2-merged.mount: Deactivated successfully.
Jan 20 18:38:47 compute-0 podman[74259]: 2026-01-20 18:38:47.745640765 +0000 UTC m=+0.101429971 container remove aaaa94b864b9ba49a0ca1ddc8ad681b9aa319bf83930177af0c44d3b2ed91495 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:38:47 compute-0 bash[74259]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0
Jan 20 18:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:38:47 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mon.compute-0.service: Deactivated successfully.
Jan 20 18:38:47 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:38:47 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mon.compute-0.service: Consumed 1.016s CPU time.
Jan 20 18:38:47 compute-0 systemd[1]: Starting Ceph mon.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:38:48 compute-0 podman[74361]: 2026-01-20 18:38:48.118720359 +0000 UTC m=+0.072869514 container create 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:48 compute-0 podman[74361]: 2026-01-20 18:38:48.08863112 +0000 UTC m=+0.042780345 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3734aaac79d252886fa7c136f022ba03cd79a371078b7f173d93991f19a5fd6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3734aaac79d252886fa7c136f022ba03cd79a371078b7f173d93991f19a5fd6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3734aaac79d252886fa7c136f022ba03cd79a371078b7f173d93991f19a5fd6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3734aaac79d252886fa7c136f022ba03cd79a371078b7f173d93991f19a5fd6e/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 podman[74361]: 2026-01-20 18:38:48.200032392 +0000 UTC m=+0.154181527 container init 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:48 compute-0 podman[74361]: 2026-01-20 18:38:48.206299112 +0000 UTC m=+0.160448227 container start 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:48 compute-0 bash[74361]: 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7
Jan 20 18:38:48 compute-0 systemd[1]: Started Ceph mon.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:38:48 compute-0 ceph-mon[74381]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: pidfile_write: ignore empty --pid-file
Jan 20 18:38:48 compute-0 ceph-mon[74381]: load: jerasure load: lrc 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: RocksDB version: 7.9.2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Git sha 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: DB SUMMARY
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: DB Session ID:  I40O2DG19JCNHUB0JQU4
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: CURRENT file:  CURRENT
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58743 ; 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                         Options.error_if_exists: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.create_if_missing: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                                     Options.env: 0x564b93dd6c20
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                                Options.info_log: 0x564b95be9ac0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                              Options.statistics: (nil)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                               Options.use_fsync: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                              Options.db_log_dir: 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                                 Options.wal_dir: 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                    Options.write_buffer_manager: 0x564b95bed900
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.unordered_write: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                               Options.row_cache: None
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                              Options.wal_filter: None
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.two_write_queues: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.wal_compression: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.atomic_flush: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.max_background_jobs: 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.max_background_compactions: -1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.max_subcompactions: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                          Options.max_open_files: -1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Compression algorithms supported:
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kZSTD supported: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kXpressCompression supported: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kBZip2Compression supported: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kLZ4Compression supported: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kZlibCompression supported: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         kSnappyCompression supported: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:           Options.merge_operator: 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:        Options.compaction_filter: None
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x564b95be9760)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x564b95c0c9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.compression: NoCompression
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.num_levels: 7
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: cbf3ab03-d51c-4622-b6c7-e997cd5246eb
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934328266945, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934328271942, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58494, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56968, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54485, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934328, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934328272091, "job": 1, "event": "recovery_finished"}
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x564b95c0ee00
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: DB pointer 0x564b95c1e000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 18:38:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   59.02 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Sum      2/0   59.02 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 2.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 2.97 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564b95c0c9b0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 18:38:48 compute-0 ceph-mon[74381]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???) e1 preinit fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).mds e1 new map
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-20T18:38:46.076055+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.287530623 +0000 UTC m=+0.050878755 container create ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4 (image=quay.io/ceph/ceph:v19, name=nostalgic_mclaren, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 18:38:48 compute-0 ceph-mon[74381]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : created 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 18:38:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 18:38:48 compute-0 systemd[1]: Started libpod-conmon-ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4.scope.
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: monmap epoch 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: last_changed 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: created 2026-01-20T18:38:43.724879+0000
Jan 20 18:38:48 compute-0 ceph-mon[74381]: min_mon_release 19 (squid)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: election_strategy: 1
Jan 20 18:38:48 compute-0 ceph-mon[74381]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:38:48 compute-0 ceph-mon[74381]: fsmap 
Jan 20 18:38:48 compute-0 ceph-mon[74381]: osdmap e1: 0 total, 0 up, 0 in
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mgrmap e1: no daemons active
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.264762144 +0000 UTC m=+0.028110296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64868be1c3f29710601a477765549c801c6765bc991d48fdac1bd2c8623ebb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64868be1c3f29710601a477765549c801c6765bc991d48fdac1bd2c8623ebb0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64868be1c3f29710601a477765549c801c6765bc991d48fdac1bd2c8623ebb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.382471627 +0000 UTC m=+0.145819779 container init ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4 (image=quay.io/ceph/ceph:v19, name=nostalgic_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.392741687 +0000 UTC m=+0.156089809 container start ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4 (image=quay.io/ceph/ceph:v19, name=nostalgic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.39618218 +0000 UTC m=+0.159530312 container attach ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4 (image=quay.io/ceph/ceph:v19, name=nostalgic_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 20 18:38:48 compute-0 systemd[1]: libpod-ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4.scope: Deactivated successfully.
Jan 20 18:38:48 compute-0 conmon[74436]: conmon ab110e8752eddced9e17 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4.scope/container/memory.events
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.612203089 +0000 UTC m=+0.375551221 container died ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4 (image=quay.io/ceph/ceph:v19, name=nostalgic_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 18:38:48 compute-0 podman[74382]: 2026-01-20 18:38:48.650698487 +0000 UTC m=+0.414046639 container remove ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4 (image=quay.io/ceph/ceph:v19, name=nostalgic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 18:38:48 compute-0 systemd[1]: libpod-conmon-ab110e8752eddced9e1724ff7177b982eea3fad176883f1ef377a92ed0b588a4.scope: Deactivated successfully.
Jan 20 18:38:48 compute-0 podman[74474]: 2026-01-20 18:38:48.746428472 +0000 UTC m=+0.054804782 container create c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3 (image=quay.io/ceph/ceph:v19, name=laughing_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:38:48 compute-0 systemd[1]: Started libpod-conmon-c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3.scope.
Jan 20 18:38:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb9c0f35f22ba6395aecca96cddb6eb02cf73b5ca2cd39c1c8d7c978ca07710/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb9c0f35f22ba6395aecca96cddb6eb02cf73b5ca2cd39c1c8d7c978ca07710/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb9c0f35f22ba6395aecca96cddb6eb02cf73b5ca2cd39c1c8d7c978ca07710/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:48 compute-0 podman[74474]: 2026-01-20 18:38:48.82461199 +0000 UTC m=+0.132988330 container init c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3 (image=quay.io/ceph/ceph:v19, name=laughing_jemison, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:48 compute-0 podman[74474]: 2026-01-20 18:38:48.729622695 +0000 UTC m=+0.037999025 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:48 compute-0 podman[74474]: 2026-01-20 18:38:48.831736234 +0000 UTC m=+0.140112534 container start c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3 (image=quay.io/ceph/ceph:v19, name=laughing_jemison, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:38:48 compute-0 podman[74474]: 2026-01-20 18:38:48.835359183 +0000 UTC m=+0.143735503 container attach c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3 (image=quay.io/ceph/ceph:v19, name=laughing_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 18:38:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
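[annotation] These two mon_commands are the bootstrap writing the cluster's networks into the centralized config store. A hedged sketch of the equivalent call from a script — the "global" scope and the subnet below are illustrative placeholders, not values taken from this log:

    import subprocess

    # "ceph config set <who> <option> <value>" writes to the mon config store.
    # 192.168.122.0/24 is a hypothetical subnet used only for illustration.
    for opt in ("public_network", "cluster_network"):
        subprocess.run(
            ["ceph", "config", "set", "global", opt, "192.168.122.0/24"],
            check=True,
        )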
Jan 20 18:38:49 compute-0 systemd[1]: libpod-c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3.scope: Deactivated successfully.
Jan 20 18:38:49 compute-0 podman[74474]: 2026-01-20 18:38:49.045075111 +0000 UTC m=+0.353451421 container died c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3 (image=quay.io/ceph/ceph:v19, name=laughing_jemison, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb9c0f35f22ba6395aecca96cddb6eb02cf73b5ca2cd39c1c8d7c978ca07710-merged.mount: Deactivated successfully.
Jan 20 18:38:49 compute-0 podman[74474]: 2026-01-20 18:38:49.086433116 +0000 UTC m=+0.394809456 container remove c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3 (image=quay.io/ceph/ceph:v19, name=laughing_jemison, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:38:49 compute-0 systemd[1]: libpod-conmon-c9570db062eb814991f18f1564550015d0d5fac0017d8cd828a89fe3e61711b3.scope: Deactivated successfully.
Jan 20 18:38:49 compute-0 systemd[1]: Reloading.
Jan 20 18:38:49 compute-0 systemd-rc-local-generator[74553]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:49 compute-0 systemd-sysv-generator[74557]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Jan 20 18:38:49 compute-0 systemd[1]: Reloading.
Jan 20 18:38:49 compute-0 systemd-rc-local-generator[74596]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:38:49 compute-0 systemd-sysv-generator[74601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Jan 20 18:38:49 compute-0 systemd[1]: Starting Ceph mgr.compute-0.cepfkm for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:38:49 compute-0 podman[74657]: 2026-01-20 18:38:49.951548591 +0000 UTC m=+0.047702760 container create 5d7fd05f6661777fcc7cbe2bbfd55cb48d26b185ca00b775802ec615bf1ff1be (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bc787984aff6a45317405673c7b8b15c1aac2224e89acf2414185b4a490153/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bc787984aff6a45317405673c7b8b15c1aac2224e89acf2414185b4a490153/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bc787984aff6a45317405673c7b8b15c1aac2224e89acf2414185b4a490153/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05bc787984aff6a45317405673c7b8b15c1aac2224e89acf2414185b4a490153/merged/var/lib/ceph/mgr/ceph-compute-0.cepfkm supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 podman[74657]: 2026-01-20 18:38:50.018544504 +0000 UTC m=+0.114698753 container init 5d7fd05f6661777fcc7cbe2bbfd55cb48d26b185ca00b775802ec615bf1ff1be (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:38:50 compute-0 podman[74657]: 2026-01-20 18:38:50.025951335 +0000 UTC m=+0.122105524 container start 5d7fd05f6661777fcc7cbe2bbfd55cb48d26b185ca00b775802ec615bf1ff1be (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:50 compute-0 podman[74657]: 2026-01-20 18:38:49.933065558 +0000 UTC m=+0.029219737 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:50 compute-0 bash[74657]: 5d7fd05f6661777fcc7cbe2bbfd55cb48d26b185ca00b775802ec615bf1ff1be
Jan 20 18:38:50 compute-0 systemd[1]: Started Ceph mgr.compute-0.cepfkm for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: pidfile_write: ignore empty --pid-file
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'alerts'
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.123356017 +0000 UTC m=+0.053683983 container create e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19 (image=quay.io/ceph/ceph:v19, name=cranky_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:50 compute-0 systemd[1]: Started libpod-conmon-e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19.scope.
Jan 20 18:38:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.099219879 +0000 UTC m=+0.029547935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5427857d66a92ecc4eaa7b2c89fa42de5f5730fd0e1af1d8ab98e9e726cc255/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5427857d66a92ecc4eaa7b2c89fa42de5f5730fd0e1af1d8ab98e9e726cc255/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5427857d66a92ecc4eaa7b2c89fa42de5f5730fd0e1af1d8ab98e9e726cc255/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'balancer'
Jan 20 18:38:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:50.195+0000 7fc62372f140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.211749252 +0000 UTC m=+0.142077238 container init e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19 (image=quay.io/ceph/ceph:v19, name=cranky_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.220065929 +0000 UTC m=+0.150393895 container start e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19 (image=quay.io/ceph/ceph:v19, name=cranky_margulis, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.223236815 +0000 UTC m=+0.153564781 container attach e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19 (image=quay.io/ceph/ceph:v19, name=cranky_margulis, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:38:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:50.280+0000 7fc62372f140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
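[annotation] The recurring "Module ... has missing NOTIFY_TYPES member" lines are ceph-mgr loading its bundled Python modules: the mgr looks for a NOTIFY_TYPES class attribute to learn which notification kinds a module subscribes to, and logs this harmless warning for modules that never declared one. A minimal sketch of a module that does declare it — the class name and subscribed types are illustrative, not from this log:

    from mgr_module import MgrModule, NotifyType  # importable only inside ceph-mgr

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES tells the mgr which notification kinds to
        # deliver, and silences the "missing NOTIFY_TYPES member" warning.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type, notify_id):
            self.log.debug("got %s (%s)", notify_type, notify_id)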
Jan 20 18:38:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'cephadm'
Jan 20 18:38:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 18:38:50 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901804472' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:50 compute-0 cranky_margulis[74713]: 
Jan 20 18:38:50 compute-0 cranky_margulis[74713]: {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "health": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "status": "HEALTH_OK",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "checks": {},
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "mutes": []
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "election_epoch": 5,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "quorum": [
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         0
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     ],
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "quorum_names": [
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "compute-0"
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     ],
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "quorum_age": 2,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "monmap": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "epoch": 1,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "min_mon_release_name": "squid",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_mons": 1
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "osdmap": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "epoch": 1,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_osds": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_up_osds": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "osd_up_since": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_in_osds": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "osd_in_since": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_remapped_pgs": 0
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "pgmap": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "pgs_by_state": [],
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_pgs": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_pools": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_objects": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "data_bytes": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "bytes_used": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "bytes_avail": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "bytes_total": 0
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "fsmap": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "epoch": 1,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "btime": "2026-01-20T18:38:46.076055+0000",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "by_rank": [],
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "up:standby": 0
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "mgrmap": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "available": false,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "num_standbys": 0,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "modules": [
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:             "iostat",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:             "nfs",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:             "restful"
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         ],
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "services": {}
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "servicemap": {
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "epoch": 1,
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "modified": "2026-01-20T18:38:46.080282+0000",
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:         "services": {}
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     },
Jan 20 18:38:50 compute-0 cranky_margulis[74713]:     "progress_events": {}
Jan 20 18:38:50 compute-0 cranky_margulis[74713]: }
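[annotation] The short-lived containers (nostalgic_mclaren, cranky_margulis, and the ones that follow) are one-shot "ceph status --format json-pretty" invocations; the bootstrap is evidently polling until a mgr reports in ("available": false, zero OSDs so far). A hedged sketch of such a readiness check, using field names taken from the JSON above:

    import json, subprocess

    def mgr_available() -> bool:
        # Same command the short-lived containers run, minus pretty-printing.
        out = subprocess.run(
            ["ceph", "status", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return bool(json.loads(out)["mgrmap"]["available"])

    # Poll roughly the way the repeated status dumps above suggest:
    # while not mgr_available(): time.sleep(2)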
Jan 20 18:38:50 compute-0 systemd[1]: libpod-e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19.scope: Deactivated successfully.
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.42552308 +0000 UTC m=+0.355851086 container died e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19 (image=quay.io/ceph/ceph:v19, name=cranky_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Jan 20 18:38:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3901804472' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5427857d66a92ecc4eaa7b2c89fa42de5f5730fd0e1af1d8ab98e9e726cc255-merged.mount: Deactivated successfully.
Jan 20 18:38:50 compute-0 podman[74677]: 2026-01-20 18:38:50.488012421 +0000 UTC m=+0.418340417 container remove e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19 (image=quay.io/ceph/ceph:v19, name=cranky_margulis, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:50 compute-0 systemd[1]: libpod-conmon-e4ec83c628f5c6ff356107289383fbac31a81e6bad1f04ac339ab84052277e19.scope: Deactivated successfully.
Jan 20 18:38:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'crash'
Jan 20 18:38:51 compute-0 ceph-mgr[74676]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:38:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'dashboard'
Jan 20 18:38:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:51.182+0000 7fc62372f140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:38:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'devicehealth'
Jan 20 18:38:51 compute-0 ceph-mgr[74676]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:38:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 18:38:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:51.869+0000 7fc62372f140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 18:38:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: Improvements in the case of bugs are welcome, but this is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 18:38:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   from numpy import show_config as show_numpy_config
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:52.045+0000 7fc62372f140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'influx'
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:52.114+0000 7fc62372f140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'insights'
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'iostat'
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:52.253+0000 7fc62372f140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'k8sevents'
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.561329797 +0000 UTC m=+0.048148431 container create f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482 (image=quay.io/ceph/ceph:v19, name=vigilant_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:52 compute-0 systemd[1]: Started libpod-conmon-f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482.scope.
Jan 20 18:38:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae012f6f031d65eb30d82ed65cf0023e06cc0f71766cb3382e88072fca13eb50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae012f6f031d65eb30d82ed65cf0023e06cc0f71766cb3382e88072fca13eb50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae012f6f031d65eb30d82ed65cf0023e06cc0f71766cb3382e88072fca13eb50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.53424928 +0000 UTC m=+0.021067954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'localpool'
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.645768745 +0000 UTC m=+0.132587409 container init f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482 (image=quay.io/ceph/ceph:v19, name=vigilant_wescoff, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.651020508 +0000 UTC m=+0.137839142 container start f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482 (image=quay.io/ceph/ceph:v19, name=vigilant_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.658831131 +0000 UTC m=+0.145649795 container attach f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482 (image=quay.io/ceph/ceph:v19, name=vigilant_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 18:38:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 18:38:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2756178437' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]: 
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]: {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "health": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "status": "HEALTH_OK",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "checks": {},
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "mutes": []
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "election_epoch": 5,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "quorum": [
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         0
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     ],
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "quorum_names": [
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "compute-0"
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     ],
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "quorum_age": 4,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "monmap": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "epoch": 1,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "min_mon_release_name": "squid",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_mons": 1
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "osdmap": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "epoch": 1,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_osds": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_up_osds": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "osd_up_since": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_in_osds": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "osd_in_since": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_remapped_pgs": 0
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "pgmap": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "pgs_by_state": [],
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_pgs": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_pools": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_objects": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "data_bytes": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "bytes_used": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "bytes_avail": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "bytes_total": 0
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "fsmap": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "epoch": 1,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "btime": "2026-01-20T18:38:46.076055+0000",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "by_rank": [],
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "up:standby": 0
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "mgrmap": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "available": false,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "num_standbys": 0,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "modules": [
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:             "iostat",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:             "nfs",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:             "restful"
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         ],
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "services": {}
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "servicemap": {
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "epoch": 1,
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "modified": "2026-01-20T18:38:46.080282+0000",
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:         "services": {}
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     },
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]:     "progress_events": {}
Jan 20 18:38:52 compute-0 vigilant_wescoff[74780]: }
Jan 20 18:38:52 compute-0 systemd[1]: libpod-f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482.scope: Deactivated successfully.
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.851234167 +0000 UTC m=+0.338052801 container died f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482 (image=quay.io/ceph/ceph:v19, name=vigilant_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae012f6f031d65eb30d82ed65cf0023e06cc0f71766cb3382e88072fca13eb50-merged.mount: Deactivated successfully.
Jan 20 18:38:52 compute-0 podman[74764]: 2026-01-20 18:38:52.883826214 +0000 UTC m=+0.370644838 container remove f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482 (image=quay.io/ceph/ceph:v19, name=vigilant_wescoff, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:38:52 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2756178437' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:52 compute-0 systemd[1]: libpod-conmon-f6af4a787ff9c31734a326bed505a43251cc2fa5a3318856d472ece20cd0e482.scope: Deactivated successfully.
Jan 20 18:38:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mirroring'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'nfs'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:53.273+0000 7fc62372f140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'orchestrator'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:53.495+0000 7fc62372f140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:53.584+0000 7fc62372f140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_support'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:53.652+0000 7fc62372f140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:53.730+0000 7fc62372f140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'progress'
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:53.803+0000 7fc62372f140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:38:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'prometheus'
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:38:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:54.141+0000 7fc62372f140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rbd_support'
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:38:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:54.242+0000 7fc62372f140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'restful'
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rgw'
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:38:54 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rook'
Jan 20 18:38:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:54.694+0000 7fc62372f140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:38:54 compute-0 podman[74819]: 2026-01-20 18:38:54.955555427 +0000 UTC m=+0.045546621 container create d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95 (image=quay.io/ceph/ceph:v19, name=eloquent_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 18:38:54 compute-0 systemd[1]: Started libpod-conmon-d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95.scope.
Jan 20 18:38:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e63df086fd310f71a9cd51540c5d36f37d3a0af8faa14e45876db24ade22f0a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e63df086fd310f71a9cd51540c5d36f37d3a0af8faa14e45876db24ade22f0a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e63df086fd310f71a9cd51540c5d36f37d3a0af8faa14e45876db24ade22f0a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:55 compute-0 podman[74819]: 2026-01-20 18:38:55.0243461 +0000 UTC m=+0.114337294 container init d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95 (image=quay.io/ceph/ceph:v19, name=eloquent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:38:55 compute-0 podman[74819]: 2026-01-20 18:38:54.936482038 +0000 UTC m=+0.026473262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:55 compute-0 podman[74819]: 2026-01-20 18:38:55.032751048 +0000 UTC m=+0.122742242 container start d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95 (image=quay.io/ceph/ceph:v19, name=eloquent_tharp, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:38:55 compute-0 podman[74819]: 2026-01-20 18:38:55.035941155 +0000 UTC m=+0.125932349 container attach d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95 (image=quay.io/ceph/ceph:v19, name=eloquent_tharp, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 18:38:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 18:38:55 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3248468240' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]: 
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]: {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "health": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "status": "HEALTH_OK",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "checks": {},
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "mutes": []
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "election_epoch": 5,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "quorum": [
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         0
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     ],
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "quorum_names": [
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "compute-0"
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     ],
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "quorum_age": 6,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "monmap": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "epoch": 1,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "min_mon_release_name": "squid",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_mons": 1
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "osdmap": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "epoch": 1,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_osds": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_up_osds": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "osd_up_since": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_in_osds": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "osd_in_since": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_remapped_pgs": 0
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "pgmap": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "pgs_by_state": [],
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_pgs": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_pools": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_objects": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "data_bytes": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "bytes_used": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "bytes_avail": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "bytes_total": 0
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "fsmap": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "epoch": 1,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "btime": "2026-01-20T18:38:46.076055+0000",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "by_rank": [],
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "up:standby": 0
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "mgrmap": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "available": false,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "num_standbys": 0,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "modules": [
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:             "iostat",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:             "nfs",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:             "restful"
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         ],
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "services": {}
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "servicemap": {
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "epoch": 1,
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "modified": "2026-01-20T18:38:46.080282+0000",
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:         "services": {}
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     },
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]:     "progress_events": {}
Jan 20 18:38:55 compute-0 eloquent_tharp[74835]: }
Jan 20 18:38:55 compute-0 systemd[1]: libpod-d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95.scope: Deactivated successfully.
Jan 20 18:38:55 compute-0 podman[74819]: 2026-01-20 18:38:55.219628434 +0000 UTC m=+0.309619638 container died d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95 (image=quay.io/ceph/ceph:v19, name=eloquent_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e63df086fd310f71a9cd51540c5d36f37d3a0af8faa14e45876db24ade22f0a-merged.mount: Deactivated successfully.
Jan 20 18:38:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3248468240' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:55 compute-0 podman[74819]: 2026-01-20 18:38:55.268502904 +0000 UTC m=+0.358494108 container remove d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95 (image=quay.io/ceph/ceph:v19, name=eloquent_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:55.279+0000 7fc62372f140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'selftest'
Jan 20 18:38:55 compute-0 systemd[1]: libpod-conmon-d77fc2c67bd96c39ac67c2170282928394524aeee4debf424f60c2d4bce97b95.scope: Deactivated successfully.
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:55.355+0000 7fc62372f140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'snap_schedule'
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:55.445+0000 7fc62372f140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'stats'
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'status'
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:55.604+0000 7fc62372f140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telegraf'
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:55.679+0000 7fc62372f140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telemetry'
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:55.837+0000 7fc62372f140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:38:55 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:38:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:56.065+0000 7fc62372f140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'volumes'
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:38:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:56.332+0000 7fc62372f140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'zabbix'
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:38:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:56.403+0000 7fc62372f140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: ms_deliver_dispatch: unhandled message 0x560616f0a9c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cepfkm
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr handle_mgr_map Activating!
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr handle_mgr_map I am now activating
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.cepfkm(active, starting, since 0.0166308s)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: balancer
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: crash
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [balancer INFO root] Starting
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Manager daemon compute-0.cepfkm is now available
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:38:56
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [balancer INFO root] No pools available
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: devicehealth
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: iostat
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Starting
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: nfs
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: orchestrator
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: pg_autoscaler
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: progress
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [progress INFO root] Loading...
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [progress INFO root] No stored events to load
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded [] historic events
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] recovery thread starting
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] starting setup
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: rbd_support
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: restful
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [restful WARNING root] server not running: no certificate configured
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: status
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: telemetry
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] PerfHandler: starting
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TaskHandler: starting
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"} v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' 
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: [rbd_support INFO root] setup complete
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: Activating manager daemon compute-0.cepfkm
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mgrmap e2: compute-0.cepfkm(active, starting, since 0.0166308s)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: Manager daemon compute-0.cepfkm is now available
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:38:56 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: volumes
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' 
Jan 20 18:38:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 20 18:38:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' 
Jan 20 18:38:57 compute-0 podman[74953]: 2026-01-20 18:38:57.339325063 +0000 UTC m=+0.050953248 container create 0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240 (image=quay.io/ceph/ceph:v19, name=unruffled_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:57 compute-0 systemd[1]: Started libpod-conmon-0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240.scope.
Jan 20 18:38:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2645b0253ba67c5d745f699a9f09917d90a57160d8957c4e4622a7cac4213d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2645b0253ba67c5d745f699a9f09917d90a57160d8957c4e4622a7cac4213d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b2645b0253ba67c5d745f699a9f09917d90a57160d8957c4e4622a7cac4213d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:57 compute-0 podman[74953]: 2026-01-20 18:38:57.308340319 +0000 UTC m=+0.019968524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:57 compute-0 podman[74953]: 2026-01-20 18:38:57.515352314 +0000 UTC m=+0.226980509 container init 0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240 (image=quay.io/ceph/ceph:v19, name=unruffled_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Jan 20 18:38:57 compute-0 podman[74953]: 2026-01-20 18:38:57.521965224 +0000 UTC m=+0.233593409 container start 0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240 (image=quay.io/ceph/ceph:v19, name=unruffled_hodgkin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:57 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.cepfkm(active, since 1.11547s)
Jan 20 18:38:57 compute-0 podman[74953]: 2026-01-20 18:38:57.527390342 +0000 UTC m=+0.239018527 container attach 0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240 (image=quay.io/ceph/ceph:v19, name=unruffled_hodgkin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:38:57 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' 
Jan 20 18:38:57 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' 
Jan 20 18:38:57 compute-0 ceph-mon[74381]: from='mgr.14102 192.168.122.100:0/1458634133' entity='mgr.compute-0.cepfkm' 
Jan 20 18:38:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 18:38:57 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/190945769' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]: 
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]: {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "health": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "status": "HEALTH_OK",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "checks": {},
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "mutes": []
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "election_epoch": 5,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "quorum": [
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         0
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     ],
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "quorum_names": [
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "compute-0"
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     ],
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "quorum_age": 9,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "monmap": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "epoch": 1,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "min_mon_release_name": "squid",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_mons": 1
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "osdmap": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "epoch": 1,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_osds": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_up_osds": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "osd_up_since": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_in_osds": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "osd_in_since": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_remapped_pgs": 0
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "pgmap": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "pgs_by_state": [],
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_pgs": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_pools": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_objects": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "data_bytes": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "bytes_used": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "bytes_avail": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "bytes_total": 0
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "fsmap": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "epoch": 1,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "btime": "2026-01-20T18:38:46.076055+0000",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "by_rank": [],
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "up:standby": 0
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "mgrmap": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "available": true,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "num_standbys": 0,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "modules": [
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:             "iostat",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:             "nfs",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:             "restful"
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         ],
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "services": {}
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "servicemap": {
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "epoch": 1,
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "modified": "2026-01-20T18:38:46.080282+0000",
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:         "services": {}
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     },
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]:     "progress_events": {}
Jan 20 18:38:57 compute-0 unruffled_hodgkin[74969]: }
Jan 20 18:38:57 compute-0 systemd[1]: libpod-0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240.scope: Deactivated successfully.
Jan 20 18:38:57 compute-0 conmon[74969]: conmon 0023d5c49168c337f530 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240.scope/container/memory.events
Jan 20 18:38:57 compute-0 podman[74953]: 2026-01-20 18:38:57.961195948 +0000 UTC m=+0.672824173 container died 0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240 (image=quay.io/ceph/ceph:v19, name=unruffled_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 18:38:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b2645b0253ba67c5d745f699a9f09917d90a57160d8957c4e4622a7cac4213d-merged.mount: Deactivated successfully.
Jan 20 18:38:58 compute-0 podman[74953]: 2026-01-20 18:38:58.006228484 +0000 UTC m=+0.717856669 container remove 0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240 (image=quay.io/ceph/ceph:v19, name=unruffled_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:58 compute-0 systemd[1]: libpod-conmon-0023d5c49168c337f5301779142b3146bfbae9cc5740f14ef8794a7acdc72240.scope: Deactivated successfully.
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.05826364 +0000 UTC m=+0.034275485 container create 66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973 (image=quay.io/ceph/ceph:v19, name=thirsty_booth, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:38:58 compute-0 systemd[1]: Started libpod-conmon-66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973.scope.
Jan 20 18:38:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8a6551a300ebe2ede096617efa49b675ea6a8070440f66ceb5d548279fa4e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8a6551a300ebe2ede096617efa49b675ea6a8070440f66ceb5d548279fa4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8a6551a300ebe2ede096617efa49b675ea6a8070440f66ceb5d548279fa4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3b8a6551a300ebe2ede096617efa49b675ea6a8070440f66ceb5d548279fa4e/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.132472499 +0000 UTC m=+0.108484374 container init 66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973 (image=quay.io/ceph/ceph:v19, name=thirsty_booth, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.137546857 +0000 UTC m=+0.113558702 container start 66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973 (image=quay.io/ceph/ceph:v19, name=thirsty_booth, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.042503281 +0000 UTC m=+0.018515146 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.140793806 +0000 UTC m=+0.116805651 container attach 66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973 (image=quay.io/ceph/ceph:v19, name=thirsty_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:38:58 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:38:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 18:38:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2302815789' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 18:38:58 compute-0 thirsty_booth[75023]: 
Jan 20 18:38:58 compute-0 thirsty_booth[75023]: [global]
Jan 20 18:38:58 compute-0 thirsty_booth[75023]:         fsid = aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:38:58 compute-0 thirsty_booth[75023]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 18:38:58 compute-0 systemd[1]: libpod-66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973.scope: Deactivated successfully.
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.468754821 +0000 UTC m=+0.444766666 container died 66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973 (image=quay.io/ceph/ceph:v19, name=thirsty_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3b8a6551a300ebe2ede096617efa49b675ea6a8070440f66ceb5d548279fa4e-merged.mount: Deactivated successfully.
Jan 20 18:38:58 compute-0 podman[75007]: 2026-01-20 18:38:58.505526792 +0000 UTC m=+0.481538637 container remove 66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973 (image=quay.io/ceph/ceph:v19, name=thirsty_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:38:58 compute-0 systemd[1]: libpod-conmon-66a32cbae9c935bd0723c5e63d00f4bbdbb2584f3d7d457e9f5063616adf3973.scope: Deactivated successfully.
Jan 20 18:38:58 compute-0 ceph-mon[74381]: mgrmap e3: compute-0.cepfkm(active, since 1.11547s)
Jan 20 18:38:58 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/190945769' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 18:38:58 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2302815789' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 18:38:58 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.cepfkm(active, since 2s)
Jan 20 18:38:58 compute-0 podman[75061]: 2026-01-20 18:38:58.578282582 +0000 UTC m=+0.045042717 container create 1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde (image=quay.io/ceph/ceph:v19, name=dazzling_albattani, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:38:58 compute-0 systemd[1]: Started libpod-conmon-1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde.scope.
Jan 20 18:38:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6f8c83cc8a71be093bcb8b7e1a5a29cce51dcd15f4db2bb4b7dd9f9e9eba78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6f8c83cc8a71be093bcb8b7e1a5a29cce51dcd15f4db2bb4b7dd9f9e9eba78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd6f8c83cc8a71be093bcb8b7e1a5a29cce51dcd15f4db2bb4b7dd9f9e9eba78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:58 compute-0 podman[75061]: 2026-01-20 18:38:58.653302164 +0000 UTC m=+0.120062319 container init 1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde (image=quay.io/ceph/ceph:v19, name=dazzling_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:38:58 compute-0 podman[75061]: 2026-01-20 18:38:58.561036573 +0000 UTC m=+0.027796748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:58 compute-0 podman[75061]: 2026-01-20 18:38:58.658848315 +0000 UTC m=+0.125608440 container start 1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde (image=quay.io/ceph/ceph:v19, name=dazzling_albattani, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 18:38:58 compute-0 podman[75061]: 2026-01-20 18:38:58.662046312 +0000 UTC m=+0.128806447 container attach 1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde (image=quay.io/ceph/ceph:v19, name=dazzling_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:38:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 20 18:38:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/929493113' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 20 18:38:59 compute-0 ceph-mon[74381]: mgrmap e4: compute-0.cepfkm(active, since 2s)
Jan 20 18:38:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/929493113' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Jan 20 18:38:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/929493113' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  1: '-n'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  2: 'mgr.compute-0.cepfkm'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  3: '-f'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  4: '--setuser'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  5: 'ceph'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  6: '--setgroup'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  7: 'ceph'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  8: '--default-log-to-file=false'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  9: '--default-log-to-journald=true'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr respawn  exe_path /proc/self/exe
Jan 20 18:38:59 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.cepfkm(active, since 3s)
Jan 20 18:38:59 compute-0 systemd[1]: libpod-1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde.scope: Deactivated successfully.
Jan 20 18:38:59 compute-0 podman[75061]: 2026-01-20 18:38:59.579395671 +0000 UTC m=+1.046155816 container died 1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde (image=quay.io/ceph/ceph:v19, name=dazzling_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:38:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd6f8c83cc8a71be093bcb8b7e1a5a29cce51dcd15f4db2bb4b7dd9f9e9eba78-merged.mount: Deactivated successfully.
Jan 20 18:38:59 compute-0 podman[75061]: 2026-01-20 18:38:59.61313548 +0000 UTC m=+1.079895605 container remove 1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde (image=quay.io/ceph/ceph:v19, name=dazzling_albattani, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:59 compute-0 systemd[1]: libpod-conmon-1f0252df19de05e5c51ad322cf368233b0def300609c8afecd85d6bd0819bfde.scope: Deactivated successfully.
Jan 20 18:38:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setuser ceph since I am not root
Jan 20 18:38:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setgroup ceph since I am not root
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: pidfile_write: ignore empty --pid-file
Jan 20 18:38:59 compute-0 podman[75115]: 2026-01-20 18:38:59.671415612 +0000 UTC m=+0.039486005 container create cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d (image=quay.io/ceph/ceph:v19, name=kind_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'alerts'
Jan 20 18:38:59 compute-0 systemd[1]: Started libpod-conmon-cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d.scope.
Jan 20 18:38:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7004a34ee64fff4f987c8ac4a292c40da2818307fa8e5a71c3f6614e72789e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7004a34ee64fff4f987c8ac4a292c40da2818307fa8e5a71c3f6614e72789e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7004a34ee64fff4f987c8ac4a292c40da2818307fa8e5a71c3f6614e72789e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:38:59 compute-0 podman[75115]: 2026-01-20 18:38:59.736040865 +0000 UTC m=+0.104111308 container init cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d (image=quay.io/ceph/ceph:v19, name=kind_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:59 compute-0 podman[75115]: 2026-01-20 18:38:59.74068996 +0000 UTC m=+0.108760353 container start cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d (image=quay.io/ceph/ceph:v19, name=kind_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:38:59 compute-0 podman[75115]: 2026-01-20 18:38:59.743919858 +0000 UTC m=+0.111990251 container attach cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d (image=quay.io/ceph/ceph:v19, name=kind_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:38:59 compute-0 podman[75115]: 2026-01-20 18:38:59.650227921 +0000 UTC m=+0.018298324 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'balancer'
Jan 20 18:38:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:59.778+0000 7f64dc381140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:38:59 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'cephadm'
Jan 20 18:38:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:38:59.866+0000 7f64dc381140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:39:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 18:39:00 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/144705708' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 18:39:00 compute-0 kind_matsumoto[75149]: {
Jan 20 18:39:00 compute-0 kind_matsumoto[75149]:     "epoch": 5,
Jan 20 18:39:00 compute-0 kind_matsumoto[75149]:     "available": true,
Jan 20 18:39:00 compute-0 kind_matsumoto[75149]:     "active_name": "compute-0.cepfkm",
Jan 20 18:39:00 compute-0 kind_matsumoto[75149]:     "num_standby": 0
Jan 20 18:39:00 compute-0 kind_matsumoto[75149]: }
Jan 20 18:39:00 compute-0 systemd[1]: libpod-cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d.scope: Deactivated successfully.
Jan 20 18:39:00 compute-0 podman[75115]: 2026-01-20 18:39:00.156503483 +0000 UTC m=+0.524573926 container died cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d (image=quay.io/ceph/ceph:v19, name=kind_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d7004a34ee64fff4f987c8ac4a292c40da2818307fa8e5a71c3f6614e72789e-merged.mount: Deactivated successfully.
Jan 20 18:39:00 compute-0 podman[75115]: 2026-01-20 18:39:00.205031262 +0000 UTC m=+0.573101655 container remove cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d (image=quay.io/ceph/ceph:v19, name=kind_matsumoto, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:39:00 compute-0 systemd[1]: libpod-conmon-cb08d44aed74a9cf08fd14dba55fd18d1f10110497b84c7e1c66021325c66e7d.scope: Deactivated successfully.
Jan 20 18:39:00 compute-0 podman[75187]: 2026-01-20 18:39:00.280861576 +0000 UTC m=+0.051752576 container create d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8 (image=quay.io/ceph/ceph:v19, name=interesting_euler, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:39:00 compute-0 podman[75187]: 2026-01-20 18:39:00.255294807 +0000 UTC m=+0.026185887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:00 compute-0 systemd[1]: Started libpod-conmon-d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8.scope.
Jan 20 18:39:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5631c054307ed85f18750246ebad25247b30b1c2c0431256d17b7b8e333c495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5631c054307ed85f18750246ebad25247b30b1c2c0431256d17b7b8e333c495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5631c054307ed85f18750246ebad25247b30b1c2c0431256d17b7b8e333c495/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:00 compute-0 podman[75187]: 2026-01-20 18:39:00.525219166 +0000 UTC m=+0.296110176 container init d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8 (image=quay.io/ceph/ceph:v19, name=interesting_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:00 compute-0 podman[75187]: 2026-01-20 18:39:00.532659247 +0000 UTC m=+0.303550237 container start d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8 (image=quay.io/ceph/ceph:v19, name=interesting_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:39:00 compute-0 podman[75187]: 2026-01-20 18:39:00.537823116 +0000 UTC m=+0.308714116 container attach d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8 (image=quay.io/ceph/ceph:v19, name=interesting_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 18:39:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/929493113' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 18:39:00 compute-0 ceph-mon[74381]: mgrmap e5: compute-0.cepfkm(active, since 3s)
Jan 20 18:39:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/144705708' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 18:39:00 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'crash'
Jan 20 18:39:00 compute-0 ceph-mgr[74676]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:39:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:00.707+0000 7f64dc381140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:39:00 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'dashboard'
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'devicehealth'
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:01.371+0000 7f64dc381140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   from numpy import show_config as show_numpy_config
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'influx'
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:01.542+0000 7f64dc381140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:01.623+0000 7f64dc381140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'insights'
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'iostat'
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:01.778+0000 7f64dc381140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:39:01 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'k8sevents'
Jan 20 18:39:02 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'localpool'
Jan 20 18:39:02 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 18:39:02 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mirroring'
Jan 20 18:39:02 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'nfs'
Jan 20 18:39:02 compute-0 ceph-mgr[74676]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:39:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:02.809+0000 7f64dc381140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:39:02 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'orchestrator'
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.023+0000 7f64dc381140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_support'
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.101+0000 7f64dc381140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.166+0000 7f64dc381140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'progress'
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.246+0000 7f64dc381140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'prometheus'
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.322+0000 7f64dc381140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.668+0000 7f64dc381140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rbd_support'
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:03.768+0000 7f64dc381140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'restful'
Jan 20 18:39:03 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rgw'
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rook'
Jan 20 18:39:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:04.212+0000 7f64dc381140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:04.815+0000 7f64dc381140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'selftest'
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:04.888+0000 7f64dc381140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'snap_schedule'
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:39:04 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'stats'
Jan 20 18:39:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:04.969+0000 7f64dc381140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'status'
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telegraf'
Jan 20 18:39:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:05.122+0000 7f64dc381140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:05.198+0000 7f64dc381140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telemetry'
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 18:39:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:05.383+0000 7f64dc381140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'volumes'
Jan 20 18:39:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:05.615+0000 7f64dc381140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:05.893+0000 7f64dc381140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'zabbix'
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:39:05.965+0000 7f64dc381140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:39:05 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:39:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 20 18:39:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:39:05 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cepfkm
Jan 20 18:39:05 compute-0 ceph-mgr[74676]: ms_deliver_dispatch: unhandled message 0x56061f1f8d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.cepfkm(active, starting, since 0.596974s)
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: mgr handle_mgr_map Activating!
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: mgr handle_mgr_map I am now activating
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"} v 0)
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:39:06 compute-0 ceph-mon[74381]: Activating manager daemon compute-0.cepfkm
Jan 20 18:39:06 compute-0 ceph-mon[74381]: osdmap e2: 0 total, 0 up, 0 in
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mgrmap e6: compute-0.cepfkm(active, starting, since 0.596974s)
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:06 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: balancer
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Starting
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:39:06
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [balancer INFO root] No pools available
Jan 20 18:39:06 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Manager daemon compute-0.cepfkm is now available
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 20 18:39:06 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 20 18:39:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: cephadm
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: crash
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: devicehealth
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Starting
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: iostat
Jan 20 18:39:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: nfs
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: orchestrator
Jan 20 18:39:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: pg_autoscaler
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: progress
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [progress INFO root] Loading...
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [progress INFO root] No stored events to load
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded [] historic events
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] recovery thread starting
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] starting setup
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: rbd_support
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: restful
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"} v 0)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: status
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: telemetry
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [restful WARNING root] server not running: no certificate configured
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] PerfHandler: starting
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TaskHandler: starting
Jan 20 18:39:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"} v 0)
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] setup complete
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: volumes
Jan 20 18:39:07 compute-0 ceph-mon[74381]: Manager daemon compute-0.cepfkm is now available
Jan 20 18:39:07 compute-0 ceph-mon[74381]: Found migration_current of "None". Setting to last migration.
Jan 20 18:39:07 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:07 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:07 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.cepfkm(active, since 1.65685s)
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 20 18:39:07 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 20 18:39:07 compute-0 interesting_euler[75213]: {
Jan 20 18:39:07 compute-0 interesting_euler[75213]:     "mgrmap_epoch": 7,
Jan 20 18:39:07 compute-0 interesting_euler[75213]:     "initialized": true
Jan 20 18:39:07 compute-0 interesting_euler[75213]: }
Jan 20 18:39:07 compute-0 systemd[1]: libpod-d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8.scope: Deactivated successfully.
Jan 20 18:39:07 compute-0 podman[75187]: 2026-01-20 18:39:07.652333644 +0000 UTC m=+7.423224684 container died d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8 (image=quay.io/ceph/ceph:v19, name=interesting_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:39:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5631c054307ed85f18750246ebad25247b30b1c2c0431256d17b7b8e333c495-merged.mount: Deactivated successfully.
Jan 20 18:39:07 compute-0 podman[75187]: 2026-01-20 18:39:07.71260243 +0000 UTC m=+7.483493460 container remove d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8 (image=quay.io/ceph/ceph:v19, name=interesting_euler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 18:39:07 compute-0 systemd[1]: libpod-conmon-d4f6c3cc06ae752cebfd6fc56f3f6be25f85354f4bb12b8a413cb579d4a8d6c8.scope: Deactivated successfully.
Jan 20 18:39:07 compute-0 podman[75362]: 2026-01-20 18:39:07.782372571 +0000 UTC m=+0.047544133 container create 6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:07 compute-0 systemd[1]: Started libpod-conmon-6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b.scope.
Jan 20 18:39:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8fc303e3e603b6717128897a1a198083bdc6b6ea6984780d64b852842ccac1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8fc303e3e603b6717128897a1a198083bdc6b6ea6984780d64b852842ccac1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e8fc303e3e603b6717128897a1a198083bdc6b6ea6984780d64b852842ccac1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:07 compute-0 podman[75362]: 2026-01-20 18:39:07.758364784 +0000 UTC m=+0.023536366 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:07 compute-0 podman[75362]: 2026-01-20 18:39:07.859352467 +0000 UTC m=+0.124524069 container init 6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 18:39:07 compute-0 podman[75362]: 2026-01-20 18:39:07.865689237 +0000 UTC m=+0.130860809 container start 6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:39:07 compute-0 podman[75362]: 2026-01-20 18:39:07.869787738 +0000 UTC m=+0.134959300 container attach 6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:08 compute-0 systemd[1]: libpod-6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b.scope: Deactivated successfully.
Jan 20 18:39:08 compute-0 podman[75405]: 2026-01-20 18:39:08.273130424 +0000 UTC m=+0.023376620 container died 6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e8fc303e3e603b6717128897a1a198083bdc6b6ea6984780d64b852842ccac1-merged.mount: Deactivated successfully.
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019925262 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:08 compute-0 podman[75405]: 2026-01-20 18:39:08.307653626 +0000 UTC m=+0.057899802 container remove 6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b (image=quay.io/ceph/ceph:v19, name=agitated_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:39:08 compute-0 systemd[1]: libpod-conmon-6f2f63c7732e48f760d8b71cb8cef539d0970238df45bfbd97024797ea2a8f3b.scope: Deactivated successfully.
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.374972711 +0000 UTC m=+0.040459132 container create 66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda (image=quay.io/ceph/ceph:v19, name=quizzical_ritchie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:08 compute-0 systemd[1]: Started libpod-conmon-66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda.scope.
Jan 20 18:39:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19de233a10f4a133ce5df293cb79fc8176cd5f18b96aa582458c92b69ffe3bfc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19de233a10f4a133ce5df293cb79fc8176cd5f18b96aa582458c92b69ffe3bfc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19de233a10f4a133ce5df293cb79fc8176cd5f18b96aa582458c92b69ffe3bfc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.438553786 +0000 UTC m=+0.104040227 container init 66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda (image=quay.io/ceph/ceph:v19, name=quizzical_ritchie, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.447563799 +0000 UTC m=+0.113050220 container start 66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda (image=quay.io/ceph/ceph:v19, name=quizzical_ritchie, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.355462025 +0000 UTC m=+0.020948466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.452024749 +0000 UTC m=+0.117511160 container attach 66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda (image=quay.io/ceph/ceph:v19, name=quizzical_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mgrmap e7: compute-0.cepfkm(active, since 1.65685s)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 20 18:39:08 compute-0 ceph-mon[74381]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 20 18:39:08 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:08 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.cepfkm(active, since 2s)
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: [cephadm INFO root] Set ssh ssh_user
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 20 18:39:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 20 18:39:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: [cephadm INFO root] Set ssh ssh_config
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 20 18:39:08 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 20 18:39:08 compute-0 quizzical_ritchie[75437]: ssh user set to ceph-admin. sudo will be used
Jan 20 18:39:08 compute-0 systemd[1]: libpod-66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda.scope: Deactivated successfully.
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.853582117 +0000 UTC m=+0.519068538 container died 66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda (image=quay.io/ceph/ceph:v19, name=quizzical_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:39:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-19de233a10f4a133ce5df293cb79fc8176cd5f18b96aa582458c92b69ffe3bfc-merged.mount: Deactivated successfully.
Jan 20 18:39:08 compute-0 podman[75420]: 2026-01-20 18:39:08.944765766 +0000 UTC m=+0.610252197 container remove 66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda (image=quay.io/ceph/ceph:v19, name=quizzical_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 20 18:39:08 compute-0 systemd[1]: libpod-conmon-66a20a9820e7f55438990faf0eca004bf942e182280e55d4ca8c17c4febe6fda.scope: Deactivated successfully.
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:09.007350714 +0000 UTC m=+0.040434172 container create 190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad (image=quay.io/ceph/ceph:v19, name=exciting_ellis, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:09 compute-0 systemd[1]: Started libpod-conmon-190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad.scope.
Jan 20 18:39:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286615a4de3d7c2ac1f543672f2a270cf7cf9717d0aa0fb9c41bd03195253a1/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286615a4de3d7c2ac1f543672f2a270cf7cf9717d0aa0fb9c41bd03195253a1/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286615a4de3d7c2ac1f543672f2a270cf7cf9717d0aa0fb9c41bd03195253a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286615a4de3d7c2ac1f543672f2a270cf7cf9717d0aa0fb9c41bd03195253a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e286615a4de3d7c2ac1f543672f2a270cf7cf9717d0aa0fb9c41bd03195253a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:08.99236954 +0000 UTC m=+0.025453018 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:09.130653719 +0000 UTC m=+0.163737197 container init 190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad (image=quay.io/ceph/ceph:v19, name=exciting_ellis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:09.135980403 +0000 UTC m=+0.169063861 container start 190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad (image=quay.io/ceph/ceph:v19, name=exciting_ellis, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:09.139663291 +0000 UTC m=+0.172746789 container attach 190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad (image=quay.io/ceph/ceph:v19, name=exciting_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 20 18:39:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO root] Set ssh private key
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 20 18:39:09 compute-0 systemd[1]: libpod-190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad.scope: Deactivated successfully.
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:09.502680951 +0000 UTC m=+0.535764409 container died 190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad (image=quay.io/ceph/ceph:v19, name=exciting_ellis, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 18:39:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e286615a4de3d7c2ac1f543672f2a270cf7cf9717d0aa0fb9c41bd03195253a1-merged.mount: Deactivated successfully.
Jan 20 18:39:09 compute-0 podman[75477]: 2026-01-20 18:39:09.540539092 +0000 UTC m=+0.573622570 container remove 190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad (image=quay.io/ceph/ceph:v19, name=exciting_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:09 compute-0 systemd[1]: libpod-conmon-190c24a29da18203c61aea4735ca60cddb1e212ebaea6c4fab174636f1264bad.scope: Deactivated successfully.
Jan 20 18:39:09 compute-0 podman[75532]: 2026-01-20 18:39:09.611850515 +0000 UTC m=+0.047559703 container create a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8 (image=quay.io/ceph/ceph:v19, name=vibrant_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:09 compute-0 systemd[1]: Started libpod-conmon-a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8.scope.
Jan 20 18:39:09 compute-0 ceph-mon[74381]: from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
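# NOTE: The dispatch above (module_name "cephadm") enables the cephadm
# orchestrator backend, i.e. roughly:
#
#   $ ceph orch set backend cephadm
#
# From this point the mgr's cephadm module services all `ceph orch ...`
# commands seen later in the log.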
Jan 20 18:39:09 compute-0 ceph-mon[74381]: mgrmap e8: compute-0.cepfkm(active, since 2s)
Jan 20 18:39:09 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:09 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:09 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7e07c90c1ddbc35854886888af2cc458a7bd5d424946976e2f2506b16ef6d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7e07c90c1ddbc35854886888af2cc458a7bd5d424946976e2f2506b16ef6d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7e07c90c1ddbc35854886888af2cc458a7bd5d424946976e2f2506b16ef6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7e07c90c1ddbc35854886888af2cc458a7bd5d424946976e2f2506b16ef6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7b7e07c90c1ddbc35854886888af2cc458a7bd5d424946976e2f2506b16ef6d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
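# NOTE: The "supports timestamps until 2038" kernel messages are
# informational: the backing XFS filesystem appears to lack the bigtime
# feature, so its inode timestamps cap at 2038-01-19 (0x7fffffff), and the
# kernel reports this once per (re)mount, which here fires for every file
# bind-mounted into a container. Assuming /var/lib/containers lives on
# that filesystem, the feature flag can be checked with:
#
#   $ xfs_info /var | grep bigtime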
Jan 20 18:39:09 compute-0 podman[75532]: 2026-01-20 18:39:09.590509489 +0000 UTC m=+0.026218717 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:09 compute-0 podman[75532]: 2026-01-20 18:39:09.696905408 +0000 UTC m=+0.132614616 container init a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8 (image=quay.io/ceph/ceph:v19, name=vibrant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:39:09 compute-0 podman[75532]: 2026-01-20 18:39:09.702200051 +0000 UTC m=+0.137909239 container start a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8 (image=quay.io/ceph/ceph:v19, name=vibrant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:39:09] ENGINE Bus STARTING
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:39:09] ENGINE Bus STARTING
Jan 20 18:39:09 compute-0 podman[75532]: 2026-01-20 18:39:09.706496197 +0000 UTC m=+0.142205385 container attach a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8 (image=quay.io/ceph/ceph:v19, name=vibrant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:39:09] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:39:09] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:39:09] ENGINE Client ('192.168.122.100', 55724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:39:09] ENGINE Client ('192.168.122.100', 55724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:39:09] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:39:09] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:39:09] ENGINE Bus STARTED
Jan 20 18:39:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:39:09] ENGINE Bus STARTED
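# NOTE: The cephadm mgr module starts two CherryPy listeners here, an
# HTTPS endpoint on 7150 and a plain-HTTP one on 8765. The "Client ...
# lost" entry during the TLS handshake is logged at INFO and is consistent
# with a peer (for example a readiness probe) connecting and closing
# before the handshake completed; by itself it is not an error. A
# hypothetical reachability check from the host:
#
#   $ curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.122.100:7150/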
Jan 20 18:39:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 18:39:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:10 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 20 18:39:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:10 compute-0 ceph-mgr[74676]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 20 18:39:10 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
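# NOTE: The 18:39:09-18:39:10 entries show the cephadm SSH identity being
# stored in the monitors' config-key store (mgr/cephadm/ssh_identity_key
# and mgr/cephadm/ssh_identity_pub), along with the ssh user. Given the
# /tmp/cephadm-ssh-key bind mounts seen earlier, the client-side commands
# were presumably of the form:
#
#   $ ceph cephadm set-user ceph-admin
#   $ ceph cephadm set-priv-key -i /tmp/cephadm-ssh-key
#   $ ceph cephadm set-pub-key -i /tmp/cephadm-ssh-key.pub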
Jan 20 18:39:10 compute-0 systemd[1]: libpod-a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8.scope: Deactivated successfully.
Jan 20 18:39:10 compute-0 podman[75532]: 2026-01-20 18:39:10.049368723 +0000 UTC m=+0.485077931 container died a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8 (image=quay.io/ceph/ceph:v19, name=vibrant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7b7e07c90c1ddbc35854886888af2cc458a7bd5d424946976e2f2506b16ef6d-merged.mount: Deactivated successfully.
Jan 20 18:39:10 compute-0 podman[75532]: 2026-01-20 18:39:10.09674649 +0000 UTC m=+0.532455668 container remove a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8 (image=quay.io/ceph/ceph:v19, name=vibrant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:10 compute-0 systemd[1]: libpod-conmon-a74572ea0afa2469ec8e56925570245ed0864b87449de7b4ee0b3d48fac5daf8.scope: Deactivated successfully.
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.156156222 +0000 UTC m=+0.039323061 container create 9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014 (image=quay.io/ceph/ceph:v19, name=jovial_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:10 compute-0 systemd[1]: Started libpod-conmon-9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014.scope.
Jan 20 18:39:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec319b2755356adafa2ff912d1820869c3c24a08f27dd1eaf1bb1ddbdf06948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec319b2755356adafa2ff912d1820869c3c24a08f27dd1eaf1bb1ddbdf06948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ec319b2755356adafa2ff912d1820869c3c24a08f27dd1eaf1bb1ddbdf06948/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.226687925 +0000 UTC m=+0.109854774 container init 9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014 (image=quay.io/ceph/ceph:v19, name=jovial_booth, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.231183336 +0000 UTC m=+0.114350185 container start 9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014 (image=quay.io/ceph/ceph:v19, name=jovial_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.234802063 +0000 UTC m=+0.117968912 container attach 9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014 (image=quay.io/ceph/ceph:v19, name=jovial_booth, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.140630504 +0000 UTC m=+0.023797363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:10 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:10 compute-0 jovial_booth[75624]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjprJae8MaBvvYpG1ruQ4rMQ2xchh8mqm1+4e381m5g8dgB4M9Cx9AFfXULItv4NHmjsbijCYgtL0TrmYzl3iYgI/0anCYExxqTiThFjn1tj/zkZEmgTkEfbre3fboiYqooOfAHT1xv3TivfAVTXYps5giyuA9UWQ69UVr/yeZfm1UDiIHbXylVMoArMDUC+mWBC2XuJ0Q34PSdQwSI/MKMbi0sfxdfpUTqkupC8kUWy20aPbsSAou8wqcPFXFqTPcvWJaezojF7XpjVEN1kg6ncae7vVscXhYCKblY1l2gfrnbIxSOtNAuRY89hvXYMvW4kLyNUk0923P0mp0ZwtV/YzAH8eHQbb5xZAK/A2IA2fMa28V33rwGz0w9P8Iwb403BZqznVZW/4n9wm8qH3soGmzZImjwtO5c5xL38I9iQLsS1cNRee6RlaNi+xk4ajjjAda0GB/JDacn0+Pj7J803V02ycpiZhVZ+UDAW5QrcdoGsMzwzPyUfv5qCF3mqs= zuul@controller
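# NOTE: The jovial_booth line is the output of the `cephadm get-pub-key`
# dispatch just above: the orchestrator's public key echoed out of the
# helper container. In the usual workflow (assumed here, not shown
# verbatim) it is then installed into the SSH user's authorized_keys so
# the mgr can reach each host:
#
#   $ ceph cephadm get-pub-key > ceph.pub
#   $ ssh-copy-id -f -i ceph.pub ceph-admin@compute-0
#
# The "Accepted publickey for ceph-admin" lines that follow confirm the
# key is in place.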
Jan 20 18:39:10 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:10 compute-0 systemd[1]: libpod-9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014.scope: Deactivated successfully.
Jan 20 18:39:10 compute-0 conmon[75624]: conmon 9d52530e488f1ff94e96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014.scope/container/memory.events
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.587963697 +0000 UTC m=+0.471130556 container died 9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014 (image=quay.io/ceph/ceph:v19, name=jovial_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ec319b2755356adafa2ff912d1820869c3c24a08f27dd1eaf1bb1ddbdf06948-merged.mount: Deactivated successfully.
Jan 20 18:39:10 compute-0 podman[75608]: 2026-01-20 18:39:10.625613942 +0000 UTC m=+0.508780791 container remove 9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014 (image=quay.io/ceph/ceph:v19, name=jovial_booth, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:10 compute-0 systemd[1]: libpod-conmon-9d52530e488f1ff94e9691b769c09f43159c0a91a85d2dad4f9acbaf46252014.scope: Deactivated successfully.
Jan 20 18:39:10 compute-0 ceph-mon[74381]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:10 compute-0 ceph-mon[74381]: Set ssh ssh_user
Jan 20 18:39:10 compute-0 ceph-mon[74381]: Set ssh ssh_config
Jan 20 18:39:10 compute-0 ceph-mon[74381]: ssh user set to ceph-admin. sudo will be used
Jan 20 18:39:10 compute-0 ceph-mon[74381]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:10 compute-0 ceph-mon[74381]: Set ssh ssh_identity_key
Jan 20 18:39:10 compute-0 ceph-mon[74381]: Set ssh private key
Jan 20 18:39:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:10 compute-0 podman[75662]: 2026-01-20 18:39:10.682946448 +0000 UTC m=+0.038416677 container create 936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f (image=quay.io/ceph/ceph:v19, name=dreamy_booth, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 18:39:10 compute-0 systemd[1]: Started libpod-conmon-936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f.scope.
Jan 20 18:39:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60a0a318587ff43123025d8e84121ef0b2ce3090a155277b6d25c6235d8ea62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60a0a318587ff43123025d8e84121ef0b2ce3090a155277b6d25c6235d8ea62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e60a0a318587ff43123025d8e84121ef0b2ce3090a155277b6d25c6235d8ea62/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:10 compute-0 podman[75662]: 2026-01-20 18:39:10.741612029 +0000 UTC m=+0.097082278 container init 936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f (image=quay.io/ceph/ceph:v19, name=dreamy_booth, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:39:10 compute-0 podman[75662]: 2026-01-20 18:39:10.746108721 +0000 UTC m=+0.101578960 container start 936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f (image=quay.io/ceph/ceph:v19, name=dreamy_booth, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:10 compute-0 podman[75662]: 2026-01-20 18:39:10.749006879 +0000 UTC m=+0.104477108 container attach 936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f (image=quay.io/ceph/ceph:v19, name=dreamy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:10 compute-0 podman[75662]: 2026-01-20 18:39:10.668079207 +0000 UTC m=+0.023549466 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:11 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:11 compute-0 sshd-session[75705]: Accepted publickey for ceph-admin from 192.168.122.100 port 60312 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:11 compute-0 systemd-logind[796]: New session 21 of user ceph-admin.
Jan 20 18:39:11 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 18:39:11 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 18:39:11 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 18:39:11 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 18:39:11 compute-0 systemd[75709]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:11 compute-0 systemd[75709]: Queued start job for default target Main User Target.
Jan 20 18:39:11 compute-0 systemd[75709]: Created slice User Application Slice.
Jan 20 18:39:11 compute-0 systemd[75709]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:39:11 compute-0 systemd[75709]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:39:11 compute-0 systemd[75709]: Reached target Paths.
Jan 20 18:39:11 compute-0 systemd[75709]: Reached target Timers.
Jan 20 18:39:11 compute-0 sshd-session[75722]: Accepted publickey for ceph-admin from 192.168.122.100 port 60316 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:11 compute-0 systemd[75709]: Starting D-Bus User Message Bus Socket...
Jan 20 18:39:11 compute-0 systemd[75709]: Starting Create User's Volatile Files and Directories...
Jan 20 18:39:11 compute-0 systemd-logind[796]: New session 23 of user ceph-admin.
Jan 20 18:39:11 compute-0 systemd[75709]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:39:11 compute-0 systemd[75709]: Reached target Sockets.
Jan 20 18:39:11 compute-0 systemd[75709]: Finished Create User's Volatile Files and Directories.
Jan 20 18:39:11 compute-0 systemd[75709]: Reached target Basic System.
Jan 20 18:39:11 compute-0 systemd[75709]: Reached target Main User Target.
Jan 20 18:39:11 compute-0 systemd[75709]: Startup finished in 125ms.
Jan 20 18:39:11 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 18:39:11 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 20 18:39:11 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 20 18:39:11 compute-0 sshd-session[75705]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:11 compute-0 sshd-session[75722]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:11 compute-0 sudo[75729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:11 compute-0 sudo[75729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:11 compute-0 sudo[75729]: pam_unix(sudo:session): session closed for user root
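# NOTE: From here on the mgr manages compute-0 over SSH as the non-root
# user ceph-admin (UID 42477): each action is a fresh SSH session, and
# privileged steps run through passwordless sudo, first locating the
# interpreter with `which python3` and then invoking the deployed cephadm
# script with it. That implies a sudoers rule along the lines of (assumed,
# not visible in this log):
#
#   ceph-admin ALL=(ALL) NOPASSWD: ALL
#
# The one-off "User Manager for UID 42477" startup above is simply
# systemd-logind provisioning the first login session for that user.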
Jan 20 18:39:11 compute-0 ceph-mon[74381]: [20/Jan/2026:18:39:09] ENGINE Bus STARTING
Jan 20 18:39:11 compute-0 ceph-mon[74381]: [20/Jan/2026:18:39:09] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:39:11 compute-0 ceph-mon[74381]: [20/Jan/2026:18:39:09] ENGINE Client ('192.168.122.100', 55724) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:39:11 compute-0 ceph-mon[74381]: [20/Jan/2026:18:39:09] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:39:11 compute-0 ceph-mon[74381]: [20/Jan/2026:18:39:09] ENGINE Bus STARTED
Jan 20 18:39:11 compute-0 ceph-mon[74381]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:11 compute-0 ceph-mon[74381]: Set ssh ssh_identity_pub
Jan 20 18:39:11 compute-0 ceph-mon[74381]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:11 compute-0 sshd-session[75754]: Accepted publickey for ceph-admin from 192.168.122.100 port 60318 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:11 compute-0 systemd-logind[796]: New session 24 of user ceph-admin.
Jan 20 18:39:11 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 20 18:39:11 compute-0 sshd-session[75754]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:11 compute-0 sudo[75758]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 20 18:39:11 compute-0 sudo[75758]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:11 compute-0 sudo[75758]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:12 compute-0 sshd-session[75783]: Accepted publickey for ceph-admin from 192.168.122.100 port 60328 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:12 compute-0 systemd-logind[796]: New session 25 of user ceph-admin.
Jan 20 18:39:12 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 20 18:39:12 compute-0 sshd-session[75783]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:12 compute-0 sudo[75787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Jan 20 18:39:12 compute-0 sudo[75787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:12 compute-0 sudo[75787]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:12 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 20 18:39:12 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 20 18:39:12 compute-0 sshd-session[75812]: Accepted publickey for ceph-admin from 192.168.122.100 port 60336 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:12 compute-0 systemd-logind[796]: New session 26 of user ceph-admin.
Jan 20 18:39:12 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 20 18:39:12 compute-0 sshd-session[75812]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:12 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:12 compute-0 sudo[75816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:12 compute-0 sudo[75816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:12 compute-0 sudo[75816]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:12 compute-0 ceph-mon[74381]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:12 compute-0 sshd-session[75841]: Accepted publickey for ceph-admin from 192.168.122.100 port 60352 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:12 compute-0 systemd-logind[796]: New session 27 of user ceph-admin.
Jan 20 18:39:12 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 20 18:39:12 compute-0 sshd-session[75841]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:12 compute-0 sudo[75845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:12 compute-0 sudo[75845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:12 compute-0 sudo[75845]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:13 compute-0 sshd-session[75870]: Accepted publickey for ceph-admin from 192.168.122.100 port 60356 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:13 compute-0 systemd-logind[796]: New session 28 of user ceph-admin.
Jan 20 18:39:13 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 20 18:39:13 compute-0 sshd-session[75870]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:13 compute-0 sudo[75874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Jan 20 18:39:13 compute-0 sudo[75874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:13 compute-0 sudo[75874]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053080 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:13 compute-0 sshd-session[75899]: Accepted publickey for ceph-admin from 192.168.122.100 port 60362 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:13 compute-0 systemd-logind[796]: New session 29 of user ceph-admin.
Jan 20 18:39:13 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 20 18:39:13 compute-0 sshd-session[75899]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:13 compute-0 sudo[75903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:13 compute-0 sudo[75903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:13 compute-0 sudo[75903]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:13 compute-0 ceph-mon[74381]: Deploying cephadm binary to compute-0
Jan 20 18:39:13 compute-0 sshd-session[75928]: Accepted publickey for ceph-admin from 192.168.122.100 port 60372 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:13 compute-0 systemd-logind[796]: New session 30 of user ceph-admin.
Jan 20 18:39:13 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 20 18:39:13 compute-0 sshd-session[75928]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:13 compute-0 sudo[75932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Jan 20 18:39:13 compute-0 sudo[75932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:13 compute-0 sudo[75932]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:14 compute-0 sshd-session[75957]: Accepted publickey for ceph-admin from 192.168.122.100 port 60382 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:14 compute-0 systemd-logind[796]: New session 31 of user ceph-admin.
Jan 20 18:39:14 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 20 18:39:14 compute-0 sshd-session[75957]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:14 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:15 compute-0 sshd-session[75984]: Accepted publickey for ceph-admin from 192.168.122.100 port 60396 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:15 compute-0 systemd-logind[796]: New session 32 of user ceph-admin.
Jan 20 18:39:15 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 20 18:39:15 compute-0 sshd-session[75984]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:15 compute-0 sudo[75988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Jan 20 18:39:15 compute-0 sudo[75988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:15 compute-0 sudo[75988]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:15 compute-0 sshd-session[76013]: Accepted publickey for ceph-admin from 192.168.122.100 port 60410 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:39:15 compute-0 systemd-logind[796]: New session 33 of user ceph-admin.
Jan 20 18:39:15 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 20 18:39:15 compute-0 sshd-session[76013]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:39:15 compute-0 sudo[76017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 20 18:39:15 compute-0 sudo[76017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:16 compute-0 sudo[76017]: pam_unix(sudo:session): session closed for user root
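# NOTE: The sudo trail from 18:39:12 to 18:39:15 is cephadm's staged
# self-install: create /var/lib/ceph/<fsid>, stage the script under
# /tmp/cephadm-<fsid>/... as cephadm.<digest>.new, chown and chmod it,
# then mv it over the final path so the swap is atomic, and finally
# re-validate the host with the deployed copy:
#
#   $ sudo python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.<digest> \
#       --timeout 895 check-host --expect-hostname compute-0
#
# (<digest> abbreviates the sha256-named script file seen in the log.)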
Jan 20 18:39:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:16 compute-0 ceph-mgr[74676]: [cephadm INFO root] Added host compute-0
Jan 20 18:39:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 18:39:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 18:39:16 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:16 compute-0 dreamy_booth[75679]: Added host 'compute-0' with addr '192.168.122.100'
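# NOTE: "Added host 'compute-0' with addr '192.168.122.100'" is the
# output of the host-registration dispatch above, equivalent to:
#
#   $ ceph orch host add compute-0 192.168.122.100
#
# cephadm records the host in its inventory, which is the
# mgr/cephadm/inventory config-key write at 18:39:16.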
Jan 20 18:39:16 compute-0 systemd[1]: libpod-936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f.scope: Deactivated successfully.
Jan 20 18:39:16 compute-0 sudo[76062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:16 compute-0 sudo[76062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:16 compute-0 sudo[76062]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:16 compute-0 podman[76084]: 2026-01-20 18:39:16.27379626 +0000 UTC m=+0.031045689 container died 936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f (image=quay.io/ceph/ceph:v19, name=dreamy_booth, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e60a0a318587ff43123025d8e84121ef0b2ce3090a155277b6d25c6235d8ea62-merged.mount: Deactivated successfully.
Jan 20 18:39:16 compute-0 podman[76084]: 2026-01-20 18:39:16.321664591 +0000 UTC m=+0.078914010 container remove 936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f (image=quay.io/ceph/ceph:v19, name=dreamy_booth, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:16 compute-0 systemd[1]: libpod-conmon-936859e1f34987933821227bea0c64c321dfdad2e6c6d7bf530f55820deab27f.scope: Deactivated successfully.
Jan 20 18:39:16 compute-0 sudo[76101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 pull
Jan 20 18:39:16 compute-0 sudo[76101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.40099971 +0000 UTC m=+0.048386716 container create 5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84 (image=quay.io/ceph/ceph:v19, name=funny_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:16 compute-0 systemd[1]: Started libpod-conmon-5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84.scope.
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.379895941 +0000 UTC m=+0.027282977 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b904dfa1c07ecfe8a0a6854392cfce37a876ed50e57f646bfcb1d02bdfac2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b904dfa1c07ecfe8a0a6854392cfce37a876ed50e57f646bfcb1d02bdfac2d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25b904dfa1c07ecfe8a0a6854392cfce37a876ed50e57f646bfcb1d02bdfac2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.495115978 +0000 UTC m=+0.142503014 container init 5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84 (image=quay.io/ceph/ceph:v19, name=funny_kirch, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.507592015 +0000 UTC m=+0.154979021 container start 5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84 (image=quay.io/ceph/ceph:v19, name=funny_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.511614903 +0000 UTC m=+0.159001909 container attach 5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84 (image=quay.io/ceph/ceph:v19, name=funny_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:39:16 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:16 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:16 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 20 18:39:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 20 18:39:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 18:39:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:16 compute-0 funny_kirch[76144]: Scheduled mon update...
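# NOTE: "Saving service mon spec with placement count:5" plus the
# mgr/cephadm/spec.mon config-key write corresponds to applying a mon
# service spec, presumably something like:
#
#   $ ceph orch apply mon 5
#
# The apply only records the spec; "Scheduled mon update..." means the
# actual daemon placement happens asynchronously on the next
# reconciliation pass of the cephadm serve loop.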
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.947858907 +0000 UTC m=+0.595245943 container died 5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84 (image=quay.io/ceph/ceph:v19, name=funny_kirch, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:16 compute-0 systemd[1]: libpod-5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84.scope: Deactivated successfully.
Jan 20 18:39:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-25b904dfa1c07ecfe8a0a6854392cfce37a876ed50e57f646bfcb1d02bdfac2d-merged.mount: Deactivated successfully.
Jan 20 18:39:16 compute-0 podman[76127]: 2026-01-20 18:39:16.995342497 +0000 UTC m=+0.642729503 container remove 5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84 (image=quay.io/ceph/ceph:v19, name=funny_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:17 compute-0 systemd[1]: libpod-conmon-5730e60ae2870a34626a313957a88380a5c785df16e891aa66bab8d40b16eb84.scope: Deactivated successfully.
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.060404581 +0000 UTC m=+0.043057581 container create 14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab (image=quay.io/ceph/ceph:v19, name=upbeat_rhodes, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:39:17 compute-0 systemd[1]: Started libpod-conmon-14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab.scope.
Jan 20 18:39:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8273c9eef4b428e85829d6a3c10caef8f0ae7239b1ec9f7c6f322d8c3af4364d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8273c9eef4b428e85829d6a3c10caef8f0ae7239b1ec9f7c6f322d8c3af4364d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8273c9eef4b428e85829d6a3c10caef8f0ae7239b1ec9f7c6f322d8c3af4364d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.041088901 +0000 UTC m=+0.023741931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.146076422 +0000 UTC m=+0.128729442 container init 14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab (image=quay.io/ceph/ceph:v19, name=upbeat_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.151487587 +0000 UTC m=+0.134140607 container start 14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab (image=quay.io/ceph/ceph:v19, name=upbeat_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:17 compute-0 podman[76161]: 2026-01-20 18:39:17.153358858 +0000 UTC m=+0.530667390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.155500636 +0000 UTC m=+0.138153656 container attach 14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab (image=quay.io/ceph/ceph:v19, name=upbeat_rhodes, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:17 compute-0 ceph-mon[74381]: Added host compute-0
Jan 20 18:39:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:39:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.262625875 +0000 UTC m=+0.039550387 container create fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d (image=quay.io/ceph/ceph:v19, name=clever_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Jan 20 18:39:17 compute-0 systemd[1]: Started libpod-conmon-fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d.scope.
Jan 20 18:39:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.244939138 +0000 UTC m=+0.021863650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.347337109 +0000 UTC m=+0.124261651 container init fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d (image=quay.io/ceph/ceph:v19, name=clever_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.357320118 +0000 UTC m=+0.134244630 container start fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d (image=quay.io/ceph/ceph:v19, name=clever_stonebraker, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.361204013 +0000 UTC m=+0.138128525 container attach fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d (image=quay.io/ceph/ceph:v19, name=clever_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:17 compute-0 clever_stonebraker[76278]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Jan 20 18:39:17 compute-0 systemd[1]: libpod-fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d.scope: Deactivated successfully.
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.456703368 +0000 UTC m=+0.233627910 container died fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d (image=quay.io/ceph/ceph:v19, name=clever_stonebraker, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 20 18:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e255996b907c2261a35de4c2ccdf33ec341d7ec41e073e92858bebba3e62cdf-merged.mount: Deactivated successfully.
Jan 20 18:39:17 compute-0 podman[76242]: 2026-01-20 18:39:17.499248416 +0000 UTC m=+0.276172908 container remove fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d (image=quay.io/ceph/ceph:v19, name=clever_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Jan 20 18:39:17 compute-0 systemd[1]: libpod-conmon-fbf2bd0446dc848daf81b7c86520f542c4b890d4b1e2db56b7f4835afb84b77d.scope: Deactivated successfully.
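The clever_stonebraker create/init/start/attach/died/remove sequence above is cephadm's usual probe pattern: each CLI check runs in a short-lived, randomly named container that prints its output (here the "ceph version 19.2.3 ... squid (stable)" line) and is removed immediately. A rough manual equivalent, assuming podman on this host can reach quay.io, would be:
    # one-shot ceph CLI probe, the way cephadm runs it; --rm removes the container on exit
    sudo podman run --rm --entrypoint /usr/bin/ceph quay.io/ceph/ceph:v19 --version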
Jan 20 18:39:17 compute-0 sudo[76101]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 20 18:39:17 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:17 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 20 18:39:17 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 20 18:39:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 18:39:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:17 compute-0 upbeat_rhodes[76223]: Scheduled mgr update...
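"Saving service mgr spec with placement count:2" is the orchestrator persisting a service specification for two manager daemons, which the scheduler then acts on ("Scheduled mgr update..."). The client command dispatched in the audit line above is equivalent to:
    # ask cephadm to maintain two mgr daemons, placed automatically
    ceph orch apply mgr 2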
Jan 20 18:39:17 compute-0 systemd[1]: libpod-14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab.scope: Deactivated successfully.
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.5962241 +0000 UTC m=+0.578877100 container died 14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab (image=quay.io/ceph/ceph:v19, name=upbeat_rhodes, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:39:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-8273c9eef4b428e85829d6a3c10caef8f0ae7239b1ec9f7c6f322d8c3af4364d-merged.mount: Deactivated successfully.
Jan 20 18:39:17 compute-0 sudo[76295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:17 compute-0 sudo[76295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:17 compute-0 sudo[76295]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:17 compute-0 podman[76207]: 2026-01-20 18:39:17.639440066 +0000 UTC m=+0.622093066 container remove 14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab (image=quay.io/ceph/ceph:v19, name=upbeat_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:17 compute-0 systemd[1]: libpod-conmon-14b2e48c4dda35cc660456ce87241397bdee057df7db75c47ead2ddd10ea3aab.scope: Deactivated successfully.
Jan 20 18:39:17 compute-0 sudo[76333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 20 18:39:17 compute-0 sudo[76333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:17 compute-0 podman[76335]: 2026-01-20 18:39:17.709733601 +0000 UTC m=+0.048507838 container create aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31 (image=quay.io/ceph/ceph:v19, name=elegant_visvesvaraya, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:17 compute-0 systemd[1]: Started libpod-conmon-aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31.scope.
Jan 20 18:39:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c119cf94d99973184ffcd05e3d964305dcd9b5d9d682b03bef9a9da41029c12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c119cf94d99973184ffcd05e3d964305dcd9b5d9d682b03bef9a9da41029c12/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c119cf94d99973184ffcd05e3d964305dcd9b5d9d682b03bef9a9da41029c12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:17 compute-0 podman[76335]: 2026-01-20 18:39:17.685338334 +0000 UTC m=+0.024112621 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:17 compute-0 podman[76335]: 2026-01-20 18:39:17.781136117 +0000 UTC m=+0.119910384 container init aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31 (image=quay.io/ceph/ceph:v19, name=elegant_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:39:17 compute-0 podman[76335]: 2026-01-20 18:39:17.786294196 +0000 UTC m=+0.125068443 container start aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31 (image=quay.io/ceph/ceph:v19, name=elegant_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:17 compute-0 podman[76335]: 2026-01-20 18:39:17.789702628 +0000 UTC m=+0.128476885 container attach aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31 (image=quay.io/ceph/ceph:v19, name=elegant_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:18 compute-0 sudo[76333]: pam_unix(sudo:session): session closed for user root
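The sudo pair above (76333) shows how the cephadm mgr module executes host-level checks: it first locates python3, then runs the cephadm script it copied under /var/lib/ceph/<fsid>/ with an execution timeout. With the cephadm package installed on the host, the same probe can be run by hand (a sketch; output formatting may differ by release):
    # verify the host satisfies cephadm's requirements (podman, systemd, time sync, ...)
    sudo cephadm check-host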
Jan 20 18:39:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:18 compute-0 sudo[76418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:18 compute-0 sudo[76418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:18 compute-0 sudo[76418]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:18 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:18 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service crash spec with placement *
Jan 20 18:39:18 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 20 18:39:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:39:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:18 compute-0 elegant_visvesvaraya[76375]: Scheduled crash update...
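As with the mgr service, "Saving service crash spec with placement *" persists a spec that puts the crash-reporting agent on every host. The dispatched client command corresponds to something like:
    # deploy the ceph-crash agent on all hosts ('*' is the match-everything placement)
    ceph orch apply crash '*'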
Jan 20 18:39:18 compute-0 systemd[1]: libpod-aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31.scope: Deactivated successfully.
Jan 20 18:39:18 compute-0 podman[76335]: 2026-01-20 18:39:18.171669728 +0000 UTC m=+0.510443955 container died aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31 (image=quay.io/ceph/ceph:v19, name=elegant_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:18 compute-0 sudo[76443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:39:18 compute-0 sudo[76443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c119cf94d99973184ffcd05e3d964305dcd9b5d9d682b03bef9a9da41029c12-merged.mount: Deactivated successfully.
Jan 20 18:39:18 compute-0 podman[76335]: 2026-01-20 18:39:18.215789708 +0000 UTC m=+0.554563945 container remove aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31 (image=quay.io/ceph/ceph:v19, name=elegant_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 18:39:18 compute-0 systemd[1]: libpod-conmon-aa58e94730c58bfa3c73112d2d9a1b9e9018c40cb04aa3ac016121c635142e31.scope: Deactivated successfully.
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.282579969 +0000 UTC m=+0.042794575 container create 4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e (image=quay.io/ceph/ceph:v19, name=stoic_blackburn, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:18 compute-0 systemd[1]: Started libpod-conmon-4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e.scope.
Jan 20 18:39:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b105842bab59e65a1ca05c9f64d599c83cf72cb8fb7b484dca2f5516eeb4ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b105842bab59e65a1ca05c9f64d599c83cf72cb8fb7b484dca2f5516eeb4ec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4b105842bab59e65a1ca05c9f64d599c83cf72cb8fb7b484dca2f5516eeb4ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.262859417 +0000 UTC m=+0.023074043 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.367247012 +0000 UTC m=+0.127461628 container init 4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e (image=quay.io/ceph/ceph:v19, name=stoic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.374917379 +0000 UTC m=+0.135131975 container start 4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e (image=quay.io/ceph/ceph:v19, name=stoic_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.379129042 +0000 UTC m=+0.139343648 container attach 4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e (image=quay.io/ceph/ceph:v19, name=stoic_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:39:18 compute-0 ceph-mon[74381]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:18 compute-0 ceph-mon[74381]: Saving service mon spec with placement count:5
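These mon-side lines replay the same client commands already audited by the mgr, here a monitor spec with a target count of five. The equivalent CLI invocation is:
    # ask cephadm to maintain five monitors
    ceph orch apply mon 5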
Jan 20 18:39:18 compute-0 ceph-mon[74381]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:18 compute-0 ceph-mon[74381]: Saving service mgr spec with placement count:2
Jan 20 18:39:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:18 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:18 compute-0 podman[76589]: 2026-01-20 18:39:18.672684388 +0000 UTC m=+0.045302352 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 20 18:39:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/541868160' entity='client.admin' 
Jan 20 18:39:18 compute-0 systemd[1]: libpod-4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e.scope: Deactivated successfully.
Jan 20 18:39:18 compute-0 podman[76589]: 2026-01-20 18:39:18.772787977 +0000 UTC m=+0.145405911 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.776664182 +0000 UTC m=+0.536878788 container died 4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e (image=quay.io/ceph/ceph:v19, name=stoic_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Jan 20 18:39:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4b105842bab59e65a1ca05c9f64d599c83cf72cb8fb7b484dca2f5516eeb4ec-merged.mount: Deactivated successfully.
Jan 20 18:39:18 compute-0 podman[76483]: 2026-01-20 18:39:18.821271425 +0000 UTC m=+0.581486031 container remove 4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e (image=quay.io/ceph/ceph:v19, name=stoic_blackburn, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:39:18 compute-0 systemd[1]: libpod-conmon-4d917467271b912f8082e4a4d1ed2d282211b6e45e4939d47df7092942fda96e.scope: Deactivated successfully.
Jan 20 18:39:18 compute-0 sudo[76443]: pam_unix(sudo:session): session closed for user root
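The invocation closed above (76443) pins the container image by digest (quay.io/ceph/ceph@sha256:7c69...) rather than by tag, so every host resolves exactly the same image. Its "ls" subcommand enumerates the daemons already deployed on this host; run manually it looks like (a sketch, assuming the packaged cephadm binary):
    # list the ceph daemons cephadm knows about on this host, as JSON
    sudo cephadm ls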
Jan 20 18:39:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:18 compute-0 podman[76637]: 2026-01-20 18:39:18.868482998 +0000 UTC m=+0.023945356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:19 compute-0 podman[76637]: 2026-01-20 18:39:19.245089594 +0000 UTC m=+0.400551912 container create c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa (image=quay.io/ceph/ceph:v19, name=nervous_lalande, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:39:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:19 compute-0 sudo[76664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:19 compute-0 sudo[76664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:19 compute-0 sudo[76664]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:19 compute-0 sudo[76689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:39:19 compute-0 sudo[76689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:19 compute-0 systemd[1]: Started libpod-conmon-c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa.scope.
Jan 20 18:39:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751accbc71ec48e28b4f5756c75a36768434076be9f6165b9dd18c9a00566aef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751accbc71ec48e28b4f5756c75a36768434076be9f6165b9dd18c9a00566aef/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/751accbc71ec48e28b4f5756c75a36768434076be9f6165b9dd18c9a00566aef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:19 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76728 (sysctl)
Jan 20 18:39:19 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 20 18:39:19 compute-0 ceph-mon[74381]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:19 compute-0 ceph-mon[74381]: Saving service crash spec with placement *
Jan 20 18:39:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/541868160' entity='client.admin' 
Jan 20 18:39:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:19 compute-0 podman[76637]: 2026-01-20 18:39:19.933949829 +0000 UTC m=+1.089412197 container init c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa (image=quay.io/ceph/ceph:v19, name=nervous_lalande, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:39:19 compute-0 podman[76637]: 2026-01-20 18:39:19.942933421 +0000 UTC m=+1.098395749 container start c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa (image=quay.io/ceph/ceph:v19, name=nervous_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:19 compute-0 podman[76637]: 2026-01-20 18:39:19.948955973 +0000 UTC m=+1.104418331 container attach c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa (image=quay.io/ceph/ceph:v19, name=nervous_lalande, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:39:19 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 20 18:39:20 compute-0 sudo[76689]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:20 compute-0 sudo[76774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:20 compute-0 sudo[76774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:20 compute-0 sudo[76774]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:20 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 20 18:39:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
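The client-keyring dispatch at 18:39:20 tells cephadm to maintain a copy of the client.admin keyring on every host labelled _admin; the mgr/cephadm/client_keyrings config-key written above stores that rule. The command form is:
    # distribute client.admin's keyring to all hosts carrying the _admin label
    ceph orch client-keyring set client.admin label:_admin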
Jan 20 18:39:20 compute-0 systemd[1]: libpod-c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa.scope: Deactivated successfully.
Jan 20 18:39:20 compute-0 podman[76637]: 2026-01-20 18:39:20.355025353 +0000 UTC m=+1.510487761 container died c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa (image=quay.io/ceph/ceph:v19, name=nervous_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 18:39:20 compute-0 sudo[76799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 20 18:39:20 compute-0 sudo[76799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-751accbc71ec48e28b4f5756c75a36768434076be9f6165b9dd18c9a00566aef-merged.mount: Deactivated successfully.
Jan 20 18:39:20 compute-0 podman[76637]: 2026-01-20 18:39:20.399293317 +0000 UTC m=+1.554755645 container remove c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa (image=quay.io/ceph/ceph:v19, name=nervous_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:20 compute-0 systemd[1]: libpod-conmon-c2dfe6b3ba2d0f1e8a1a960e69bcaf11fded76d58d3a2b4f573c5270088a12aa.scope: Deactivated successfully.
Jan 20 18:39:20 compute-0 podman[76839]: 2026-01-20 18:39:20.480913738 +0000 UTC m=+0.048996572 container create 37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c (image=quay.io/ceph/ceph:v19, name=epic_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:39:20 compute-0 systemd[1]: Started libpod-conmon-37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c.scope.
Jan 20 18:39:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693840882bf8b8bd9045fd25f4a8d40536444c41d0586b579ff44250a95d2dfa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693840882bf8b8bd9045fd25f4a8d40536444c41d0586b579ff44250a95d2dfa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/693840882bf8b8bd9045fd25f4a8d40536444c41d0586b579ff44250a95d2dfa/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:20 compute-0 podman[76839]: 2026-01-20 18:39:20.460079017 +0000 UTC m=+0.028161861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:20 compute-0 podman[76839]: 2026-01-20 18:39:20.567966887 +0000 UTC m=+0.136049721 container init 37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c (image=quay.io/ceph/ceph:v19, name=epic_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:20 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:20 compute-0 podman[76839]: 2026-01-20 18:39:20.574884563 +0000 UTC m=+0.142967377 container start 37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c (image=quay.io/ceph/ceph:v19, name=epic_mclean, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 18:39:20 compute-0 podman[76839]: 2026-01-20 18:39:20.578368457 +0000 UTC m=+0.146451311 container attach 37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c (image=quay.io/ceph/ceph:v19, name=epic_mclean, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:39:20 compute-0 sudo[76799]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
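Each refresh pass (check-host, gather-facts, list-networks) ends with the mgr caching its findings under the mgr/cephadm/host.compute-0 config-key, which is what these repeated config-key set commands are writing. The cached blob can be inspected directly (a sketch; the value is internal JSON whose layout is not guaranteed):
    # dump the orchestrator's cached view of this host
    ceph config-key get mgr/cephadm/host.compute-0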
Jan 20 18:39:20 compute-0 sudo[76878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:20 compute-0 sudo[76878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:20 compute-0 sudo[76878]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:20 compute-0 sudo[76905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- inventory --format=json-pretty --filter-for-batch
Jan 20 18:39:20 compute-0 sudo[76905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
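Here cephadm wraps ceph-volume inside the digest-pinned ceph container to inventory block devices, with --filter-for-batch excluding devices unusable for OSDs. A hand-run equivalent, again assuming the packaged cephadm binary, would be:
    # list this host's disks and whether ceph-volume considers them available
    sudo cephadm ceph-volume -- inventory --format=json-pretty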
Jan 20 18:39:20 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:20 compute-0 ceph-mgr[74676]: [cephadm INFO root] Added label _admin to host compute-0
Jan 20 18:39:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 20 18:39:20 compute-0 epic_mclean[76855]: Added label _admin to host compute-0
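The _admin label drives the client-keyring rule set moments earlier: labelled hosts receive the cluster config and admin keyring under /etc/ceph. The command dispatched in the audit line at 18:39:20 is:
    # mark compute-0 as an admin host
    ceph orch host label add compute-0 _admin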
Jan 20 18:39:21 compute-0 systemd[1]: libpod-37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c.scope: Deactivated successfully.
Jan 20 18:39:21 compute-0 podman[76839]: 2026-01-20 18:39:21.019181014 +0000 UTC m=+0.587263838 container died 37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c (image=quay.io/ceph/ceph:v19, name=epic_mclean, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-693840882bf8b8bd9045fd25f4a8d40536444c41d0586b579ff44250a95d2dfa-merged.mount: Deactivated successfully.
Jan 20 18:39:21 compute-0 podman[76839]: 2026-01-20 18:39:21.061273729 +0000 UTC m=+0.629356553 container remove 37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c (image=quay.io/ceph/ceph:v19, name=epic_mclean, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:39:21 compute-0 systemd[1]: libpod-conmon-37d55f347c7e9bc352c581a2705ed1f2e54763f7d931459924b160081cfdcc2c.scope: Deactivated successfully.
Jan 20 18:39:21 compute-0 podman[76987]: 2026-01-20 18:39:21.144186504 +0000 UTC m=+0.052581558 container create 3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe (image=quay.io/ceph/ceph:v19, name=nifty_elion, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 18:39:21 compute-0 systemd[1]: Started libpod-conmon-3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe.scope.
Jan 20 18:39:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6edf4d5afc9b497648c841368f5362e4a9902d9970421f852d6bbef2fa7407/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6edf4d5afc9b497648c841368f5362e4a9902d9970421f852d6bbef2fa7407/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c6edf4d5afc9b497648c841368f5362e4a9902d9970421f852d6bbef2fa7407/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:21 compute-0 podman[76987]: 2026-01-20 18:39:21.12100846 +0000 UTC m=+0.029403314 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:21 compute-0 podman[77015]: 2026-01-20 18:39:21.226077653 +0000 UTC m=+0.056808113 container create 2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:21 compute-0 podman[76987]: 2026-01-20 18:39:21.235719452 +0000 UTC m=+0.144114296 container init 3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe (image=quay.io/ceph/ceph:v19, name=nifty_elion, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 18:39:21 compute-0 podman[76987]: 2026-01-20 18:39:21.241451438 +0000 UTC m=+0.149846262 container start 3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe (image=quay.io/ceph/ceph:v19, name=nifty_elion, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:21 compute-0 podman[76987]: 2026-01-20 18:39:21.247951842 +0000 UTC m=+0.156346666 container attach 3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe (image=quay.io/ceph/ceph:v19, name=nifty_elion, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:21 compute-0 systemd[1]: Started libpod-conmon-2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f.scope.
Jan 20 18:39:21 compute-0 podman[77015]: 2026-01-20 18:39:21.195751895 +0000 UTC m=+0.026482445 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:39:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:21 compute-0 podman[77015]: 2026-01-20 18:39:21.314872467 +0000 UTC m=+0.145602977 container init 2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_northcutt, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:39:21 compute-0 podman[77015]: 2026-01-20 18:39:21.319612864 +0000 UTC m=+0.150343324 container start 2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_northcutt, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:21 compute-0 upbeat_northcutt[77039]: 167 167
Jan 20 18:39:21 compute-0 podman[77015]: 2026-01-20 18:39:21.323577461 +0000 UTC m=+0.154307961 container attach 2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:39:21 compute-0 systemd[1]: libpod-2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f.scope: Deactivated successfully.
Jan 20 18:39:21 compute-0 podman[77015]: 2026-01-20 18:39:21.325096652 +0000 UTC m=+0.155827112 container died 2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:39:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 20 18:39:22 compute-0 ceph-mon[74381]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:22 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:22 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:22 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1293111657' entity='client.admin' 
Jan 20 18:39:22 compute-0 nifty_elion[77024]: set mgr/dashboard/cluster/status
Jan 20 18:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c52f05b10190b3160cbc9d2f32e417f1b635823c0f9df638dc336f39a262626-merged.mount: Deactivated successfully.
Jan 20 18:39:22 compute-0 podman[77015]: 2026-01-20 18:39:22.433954574 +0000 UTC m=+1.264685034 container remove 2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:39:22 compute-0 systemd[1]: libpod-conmon-2db2bf72f60d0b38d84851b83fa313621f0914a091208cb147b62ff87950795f.scope: Deactivated successfully.
Jan 20 18:39:22 compute-0 systemd[1]: libpod-3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe.scope: Deactivated successfully.
Jan 20 18:39:22 compute-0 podman[76987]: 2026-01-20 18:39:22.453678366 +0000 UTC m=+1.362073190 container died 3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe (image=quay.io/ceph/ceph:v19, name=nifty_elion, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c6edf4d5afc9b497648c841368f5362e4a9902d9970421f852d6bbef2fa7407-merged.mount: Deactivated successfully.
Jan 20 18:39:22 compute-0 podman[76987]: 2026-01-20 18:39:22.512628015 +0000 UTC m=+1.421022839 container remove 3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe (image=quay.io/ceph/ceph:v19, name=nifty_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 18:39:22 compute-0 systemd[1]: libpod-conmon-3e2c558335ccf0d69b57e89ad87adbbf7ffe12590c2100c566c5808600428cfe.scope: Deactivated successfully.
Jan 20 18:39:22 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:22 compute-0 sudo[73332]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:22 compute-0 podman[77098]: 2026-01-20 18:39:22.687494341 +0000 UTC m=+0.034316056 container create fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lehmann, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:39:22 compute-0 systemd[1]: Started libpod-conmon-fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e.scope.
Jan 20 18:39:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1831273f1d0b3604fa379e253dcf147329dbbdb8fef6fcdd7f53928211d0a593/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1831273f1d0b3604fa379e253dcf147329dbbdb8fef6fcdd7f53928211d0a593/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1831273f1d0b3604fa379e253dcf147329dbbdb8fef6fcdd7f53928211d0a593/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1831273f1d0b3604fa379e253dcf147329dbbdb8fef6fcdd7f53928211d0a593/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:22 compute-0 podman[77098]: 2026-01-20 18:39:22.766971624 +0000 UTC m=+0.113793339 container init fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:22 compute-0 podman[77098]: 2026-01-20 18:39:22.673123423 +0000 UTC m=+0.019945168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:39:22 compute-0 podman[77098]: 2026-01-20 18:39:22.773045798 +0000 UTC m=+0.119867523 container start fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lehmann, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:22 compute-0 podman[77098]: 2026-01-20 18:39:22.776478871 +0000 UTC m=+0.123300606 container attach fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:22 compute-0 sudo[77142]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcuhvoczxagwpperlbtirjcctgzkyltj ; /usr/bin/python3'
Jan 20 18:39:22 compute-0 sudo[77142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:23 compute-0 python3[77144]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
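
The Ansible task above runs the ceph CLI inside a throwaway container (--rm) that shares the host network and IPC namespaces, with /etc/ceph bind-mounted in. Stripped of the podman plumbing, it amounts to the following, assuming a host where the ceph CLI and admin keyring are directly available:

    ceph --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set mgr mgr/cephadm/use_repo_digest false

With use_repo_digest disabled, cephadm deploys daemons by the image tag it was given (here ceph:v19) rather than first resolving the tag to a repo digest.
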
Jan 20 18:39:23 compute-0 podman[77150]: 2026-01-20 18:39:23.086759118 +0000 UTC m=+0.022716524 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]: [
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:     {
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "available": false,
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "being_replaced": false,
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "ceph_device_lvm": false,
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "lsm_data": {},
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "lvs": [],
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "path": "/dev/sr0",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "rejected_reasons": [
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "Insufficient space (<5GB)",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "Has a FileSystem"
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         ],
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         "sys_api": {
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "actuators": null,
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "device_nodes": [
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:                 "sr0"
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             ],
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "devname": "sr0",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "human_readable_size": "482.00 KB",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "id_bus": "ata",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "model": "QEMU DVD-ROM",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "nr_requests": "2",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "parent": "/dev/sr0",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "partitions": {},
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "path": "/dev/sr0",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "removable": "1",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "rev": "2.5+",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "ro": "0",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "rotational": "1",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "sas_address": "",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "sas_device_handle": "",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "scheduler_mode": "mq-deadline",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "sectors": 0,
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "sectorsize": "2048",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "size": 493568.0,
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "support_discard": "2048",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "type": "disk",
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:             "vendor": "QEMU"
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:         }
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]:     }
Jan 20 18:39:23 compute-0 agitated_lehmann[77114]: ]
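
The JSON block above is cephadm's device inventory for compute-0 (ceph-volume inventory format), gathered so the orchestrator can decide which disks are eligible to become OSDs. The only device found, /dev/sr0 (the QEMU DVD-ROM, ~482 KB), is rejected as too small (<5GB) and as already carrying a filesystem. Once the orchestrator is up, the same data can be queried from any admin host; the host argument and format flag below are shown for illustration:

    ceph orch device ls compute-0 --format json-pretty
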
Jan 20 18:39:23 compute-0 systemd[1]: libpod-fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e.scope: Deactivated successfully.
Jan 20 18:39:23 compute-0 conmon[77114]: conmon fb6da9a064e0ffb3047d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e.scope/container/memory.events
Jan 20 18:39:23 compute-0 podman[77150]: 2026-01-20 18:39:23.728694688 +0000 UTC m=+0.664652064 container create 5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1 (image=quay.io/ceph/ceph:v19, name=youthful_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:23 compute-0 ceph-mon[74381]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:23 compute-0 ceph-mon[74381]: Added label _admin to host compute-0
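
The _admin label drives client-keyring distribution: together with the "orch client-keyring set ... placement label:_admin" call dispatched at 18:39:22, it tells cephadm to keep /etc/ceph/ceph.conf and the admin keyring in sync on every host carrying the label, which is exactly the file-copy activity that follows below. The CLI form of the dispatched command is:

    ceph orch host label add compute-0 _admin
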
Jan 20 18:39:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1293111657' entity='client.admin' 
Jan 20 18:39:23 compute-0 systemd[1]: Started libpod-conmon-5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1.scope.
Jan 20 18:39:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7baf7d66f08b8f24766e13529945e2e3445f41226d39f0268111ff5660cf239/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7baf7d66f08b8f24766e13529945e2e3445f41226d39f0268111ff5660cf239/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:23 compute-0 podman[77150]: 2026-01-20 18:39:23.809522278 +0000 UTC m=+0.745479664 container init 5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1 (image=quay.io/ceph/ceph:v19, name=youthful_shannon, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:23 compute-0 podman[77150]: 2026-01-20 18:39:23.816406433 +0000 UTC m=+0.752363809 container start 5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1 (image=quay.io/ceph/ceph:v19, name=youthful_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:39:24 compute-0 podman[77150]: 2026-01-20 18:39:24.047795743 +0000 UTC m=+0.983753139 container attach 5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1 (image=quay.io/ceph/ceph:v19, name=youthful_shannon, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:24 compute-0 podman[77098]: 2026-01-20 18:39:24.118220481 +0000 UTC m=+1.465042196 container died fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 20 18:39:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3217505446' entity='client.admin' 
Jan 20 18:39:24 compute-0 ceph-mgr[74676]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 18:39:24 compute-0 systemd[1]: libpod-5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1.scope: Deactivated successfully.
Jan 20 18:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-1831273f1d0b3604fa379e253dcf147329dbbdb8fef6fcdd7f53928211d0a593-merged.mount: Deactivated successfully.
Jan 20 18:39:24 compute-0 podman[78181]: 2026-01-20 18:39:24.648559943 +0000 UTC m=+1.145589243 container remove fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_lehmann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:24 compute-0 systemd[1]: libpod-conmon-fb6da9a064e0ffb3047d3a57ef2312812f57c7768d7d406b0c8107002952fd0e.scope: Deactivated successfully.
Jan 20 18:39:24 compute-0 podman[77150]: 2026-01-20 18:39:24.660020891 +0000 UTC m=+1.595978277 container died 5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1 (image=quay.io/ceph/ceph:v19, name=youthful_shannon, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Jan 20 18:39:24 compute-0 sudo[76905]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c7baf7d66f08b8f24766e13529945e2e3445f41226d39f0268111ff5660cf239-merged.mount: Deactivated successfully.
Jan 20 18:39:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 18:39:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
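
cephadm autotunes osd_memory_target per host based on the OSDs it has placed there; since compute-0 has no OSDs yet, there is nothing to tune and the module clears any stale per-host override. The dispatched mon_command corresponds to:

    ceph config rm osd/host:compute-0 osd_memory_target
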
Jan 20 18:39:25 compute-0 podman[78224]: 2026-01-20 18:39:25.228102031 +0000 UTC m=+0.631317996 container remove 5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1 (image=quay.io/ceph/ceph:v19, name=youthful_shannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:25 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:39:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:39:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:39:25 compute-0 systemd[1]: libpod-conmon-5a78d441574686319151dd73e1fc4b89ed9c8a475da7b4f081f020f1b041b4e1.scope: Deactivated successfully.
Jan 20 18:39:25 compute-0 sudo[77142]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:39:25 compute-0 sudo[78237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78237]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:39:25 compute-0 sudo[78262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78262]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:39:25 compute-0 sudo[78287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78287]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:25 compute-0 sudo[78312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78312]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:39:25 compute-0 sudo[78337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78337]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:39:25 compute-0 sudo[78385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78385]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:39:25 compute-0 sudo[78433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78433]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3217505446' entity='client.admin' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 sudo[78486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:25 compute-0 sudo[78486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78486]: pam_unix(sudo:session): session closed for user root
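
The sudo sequence above is cephadm's standard file-distribution pattern: stage the new file under a per-fsid tree in /tmp, hand the tree to ceph-admin long enough to write the payload (sent over the SSH channel, which is why the content never appears in the log), restore root ownership and the final mode, then mv it into place; the rename is atomic when the staging tree sits on the same filesystem as the destination. As a standalone sketch of the same steps:

    TMP=/tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
    sudo mkdir -p /etc/ceph "$TMP/etc/ceph"
    sudo touch "$TMP/etc/ceph/ceph.conf.new"
    sudo chown -R ceph-admin "$TMP"                 # let the SSH user write the payload
    sudo chmod 644 "$TMP/etc/ceph/ceph.conf.new"
    sudo chown -R 0:0 "$TMP/etc/ceph/ceph.conf.new"
    sudo chmod 644 "$TMP/etc/ceph/ceph.conf.new"
    sudo mv "$TMP/etc/ceph/ceph.conf.new" /etc/ceph/ceph.conf
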
Jan 20 18:39:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:39:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:39:25 compute-0 sudo[78535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:39:25 compute-0 sudo[78535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78535]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:39:25 compute-0 sudo[78560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78560]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:39:25 compute-0 sudo[78585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78585]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:25 compute-0 sudo[78615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78615]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:25 compute-0 sudo[78662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:39:25 compute-0 sudo[78662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:25 compute-0 sudo[78662]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvnjmdwpdmocnfpscadypslidfpzorqo ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768934365.5872622-37259-200122268519667/async_wrapper.py j111329493787 30 /home/zuul/.ansible/tmp/ansible-tmp-1768934365.5872622-37259-200122268519667/AnsiballZ_command.py _'
Jan 20 18:39:26 compute-0 sudo[78753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:26 compute-0 sudo[78757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:39:26 compute-0 sudo[78757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78757]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:39:26 compute-0 sudo[78783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78783]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:39:26 compute-0 sudo[78808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78808]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:39:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:39:26 compute-0 ansible-async_wrapper.py[78761]: Invoked with j111329493787 30 /home/zuul/.ansible/tmp/ansible-tmp-1768934365.5872622-37259-200122268519667/AnsiballZ_command.py _
Jan 20 18:39:26 compute-0 ansible-async_wrapper.py[78858]: Starting module and watcher
Jan 20 18:39:26 compute-0 ansible-async_wrapper.py[78858]: Start watching 78859 (30)
Jan 20 18:39:26 compute-0 ansible-async_wrapper.py[78859]: Start module (78859)
Jan 20 18:39:26 compute-0 ansible-async_wrapper.py[78761]: Return async_wrapper task started.
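
These ansible-async_wrapper.py lines are Ansible's async task machinery: the wrapper forks the real module (pid 78859 here), starts a watcher that will kill it after the 30-second timeout, and returns to the controller immediately; the controller then polls with async_status (visible below at 18:39:27 and 18:39:28) until the job records a result under the configured async dir. The wrapper invocation logged above decomposes as:

    # Arguments, per the "Invoked with" line above: <job id> <timeout seconds> <module payload> _
    /usr/bin/python3 async_wrapper.py j111329493787 30 \
        /home/zuul/.ansible/tmp/ansible-tmp-1768934365.5872622-37259-200122268519667/AnsiballZ_command.py _
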
Jan 20 18:39:26 compute-0 sudo[78833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:39:26 compute-0 sudo[78833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78833]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78753]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:39:26 compute-0 sudo[78863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78863]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:39:26 compute-0 sudo[78888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78888]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 python3[78860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:39:26 compute-0 sudo[78913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:26 compute-0 ceph-mgr[74676]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 20 18:39:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:26 compute-0 sudo[78913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78913]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 sudo[78947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:39:26 compute-0 sudo[78947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78947]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:26 compute-0 podman[78914]: 2026-01-20 18:39:26.735765227 +0000 UTC m=+0.388120048 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:26 compute-0 sudo[78999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:39:26 compute-0 sudo[78999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:26 compute-0 sudo[78999]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:39:27 compute-0 sudo[79024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79024]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 18:39:27 compute-0 sudo[79049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79049]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:39:27 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:39:27 compute-0 sudo[79074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:39:27 compute-0 sudo[79074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79074]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:39:27 compute-0 sudo[79099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79099]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:39:27 compute-0 sudo[79147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79147]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
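
This health warning is expected at this stage: osd_pool_default_size is 1 (evidently lowered for a single-node job; the shipped default is 3) and no OSDs exist yet, so the check "OSD count 0 < 1" fails. It should clear as soon as the first OSD comes up. To inspect it interactively:

    ceph health detail                            # lists TOO_FEW_OSDS with the count comparison
    ceph config get mon osd_pool_default_size     # confirms the configured floor
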
Jan 20 18:39:27 compute-0 sudo[79172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:27 compute-0 sudo[79172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79172]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:39:27 compute-0 sudo[79197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79197]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79244]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekeqneiohsppzpovpppvupauznzboknr ; /usr/bin/python3'
Jan 20 18:39:27 compute-0 sudo[79244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:27 compute-0 sudo[79271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:39:27 compute-0 sudo[79271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79271]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 python3[79248]: ansible-ansible.legacy.async_status Invoked with jid=j111329493787.78761 mode=status _async_dir=/root/.ansible_async
Jan 20 18:39:27 compute-0 sudo[79244]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 sudo[79296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:39:27 compute-0 sudo[79296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79296]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 podman[78914]: 2026-01-20 18:39:27.739709929 +0000 UTC m=+1.392064730 container create bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a (image=quay.io/ceph/ceph:v19, name=adoring_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:27 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:39:27 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:39:27 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:39:27 compute-0 sudo[79321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:39:27 compute-0 sudo[79321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79321]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:39:27 compute-0 systemd[1]: Started libpod-conmon-bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a.scope.
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:27 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 2aba2ed6-c8b6-4657-a5c4-82065bb2a03b (Updating crash deployment (+1 -> 1))
Jan 20 18:39:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 18:39:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:27 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:27 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 20 18:39:27 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
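
Deploying the crash collector begins with minting its keyring; the auth get-or-create mon_command dispatched above is the programmatic form of the CLI below. The crash profile caps grant just enough to post crash dumps to the mgr:

    ceph auth get-or-create client.crash.compute-0 \
        mon 'profile crash' mgr 'profile crash'
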
Jan 20 18:39:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c325561a4233a24e779de07da396b7aeb2f46be4cd1ea1a6ac299382d082d78/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c325561a4233a24e779de07da396b7aeb2f46be4cd1ea1a6ac299382d082d78/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:27 compute-0 podman[78914]: 2026-01-20 18:39:27.951739246 +0000 UTC m=+1.604094077 container init bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a (image=quay.io/ceph/ceph:v19, name=adoring_banzai, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:27 compute-0 podman[78914]: 2026-01-20 18:39:27.960114532 +0000 UTC m=+1.612469333 container start bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a (image=quay.io/ceph/ceph:v19, name=adoring_banzai, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:27 compute-0 podman[78914]: 2026-01-20 18:39:27.963355039 +0000 UTC m=+1.615709890 container attach bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a (image=quay.io/ceph/ceph:v19, name=adoring_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:39:27 compute-0 sudo[79351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:27 compute-0 sudo[79351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:27 compute-0 sudo[79351]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:28 compute-0 sudo[79377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:28 compute-0 sudo[79377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
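
To create the daemon, the mgr does not drive podman directly: it ships a content-addressed copy of the cephadm script to the host (hence the hash suffix on the path above) and runs it as root with the internal _orch deploy subcommand. Schematically, with placeholders for the values shown in full above, and assuming (per current cephadm behavior, not visible in this log) that the daemon spec is passed as JSON on stdin since it never appears on the command line:

    sudo python3 /var/lib/ceph/<fsid>/cephadm.<sha256-of-script> \
        --image quay.io/ceph/ceph@sha256:<digest> --timeout 895 \
        _orch deploy --fsid <fsid>
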
Jan 20 18:39:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:28 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:39:28 compute-0 adoring_banzai[79348]: 
Jan 20 18:39:28 compute-0 adoring_banzai[79348]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 18:39:28 compute-0 systemd[1]: libpod-bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a.scope: Deactivated successfully.
Jan 20 18:39:28 compute-0 podman[78914]: 2026-01-20 18:39:28.399414838 +0000 UTC m=+2.051769639 container died bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a (image=quay.io/ceph/ceph:v19, name=adoring_banzai, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:39:28 compute-0 podman[79462]: 2026-01-20 18:39:28.414161195 +0000 UTC m=+0.050209844 container create 1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_meninsky, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:28 compute-0 systemd[1]: Started libpod-conmon-1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae.scope.
Jan 20 18:39:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:28 compute-0 podman[79462]: 2026-01-20 18:39:28.386044577 +0000 UTC m=+0.022093236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:39:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c325561a4233a24e779de07da396b7aeb2f46be4cd1ea1a6ac299382d082d78-merged.mount: Deactivated successfully.
Jan 20 18:39:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:28 compute-0 sudo[79539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-argkawndeebwurzirdamqdgwzlkhvpkr ; /usr/bin/python3'
Jan 20 18:39:28 compute-0 sudo[79539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:28 compute-0 python3[79541]: ansible-ansible.legacy.async_status Invoked with jid=j111329493787.78761 mode=status _async_dir=/root/.ansible_async
Jan 20 18:39:28 compute-0 sudo[79539]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:29 compute-0 podman[79462]: 2026-01-20 18:39:29.271366352 +0000 UTC m=+0.907414991 container init 1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:29 compute-0 podman[79462]: 2026-01-20 18:39:29.27949229 +0000 UTC m=+0.915540919 container start 1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_meninsky, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 18:39:29 compute-0 inspiring_meninsky[79490]: 167 167
Jan 20 18:39:29 compute-0 systemd[1]: libpod-1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae.scope: Deactivated successfully.
Jan 20 18:39:29 compute-0 ceph-mon[74381]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:29 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:39:29 compute-0 ceph-mon[74381]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
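The TOO_FEW_OSDS health check above fires because the cluster has zero OSDs while osd_pool_default_size is 1; it clears on its own once the first OSD comes up. A minimal sketch of how to confirm both sides of that comparison from any node holding the admin keyring (standard ceph CLI, not part of this job):

    # Active warnings in detail; TOO_FEW_OSDS lists both counts
    ceph health detail

    # The replica-size floor the check compares against (1 here)
    ceph config get mon osd_pool_default_size

    # Current OSD totals; the warning clears once count >= size
    ceph osd stat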
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:29 compute-0 ceph-mon[74381]: Deploying daemon crash.compute-0 on compute-0
Jan 20 18:39:29 compute-0 ceph-mon[74381]: from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:39:29 compute-0 podman[79462]: 2026-01-20 18:39:29.409711351 +0000 UTC m=+1.045760080 container attach 1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_meninsky, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:29 compute-0 podman[79462]: 2026-01-20 18:39:29.410484672 +0000 UTC m=+1.046533301 container died 1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:39:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6cd36ab57513e2edd0cf71cd381ae909984fe41fc7151e80610e8180e53f13b-merged.mount: Deactivated successfully.
Jan 20 18:39:29 compute-0 podman[79462]: 2026-01-20 18:39:29.530601892 +0000 UTC m=+1.166650521 container remove 1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_meninsky, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:29 compute-0 podman[78914]: 2026-01-20 18:39:29.550817596 +0000 UTC m=+3.203172397 container remove bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a (image=quay.io/ceph/ceph:v19, name=adoring_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:39:29 compute-0 systemd[1]: libpod-conmon-bafb9b3a2fa68599a744cae2ba76b6302a4636be841afad00986bb8ad60b3f8a.scope: Deactivated successfully.
Jan 20 18:39:29 compute-0 ansible-async_wrapper.py[78859]: Module complete (78859)
Jan 20 18:39:29 compute-0 systemd[1]: Reloading.
Jan 20 18:39:29 compute-0 systemd-rc-local-generator[79583]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:39:29 compute-0 systemd-sysv-generator[79586]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:39:29 compute-0 systemd[1]: libpod-conmon-1c1a3cf511185d604a54aba70bc6419303bfb3e0cb82c7e14c288f4ed6cf32ae.scope: Deactivated successfully.
Jan 20 18:39:29 compute-0 systemd[1]: Reloading.
Jan 20 18:39:30 compute-0 systemd-rc-local-generator[79631]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:39:30 compute-0 systemd-sysv-generator[79636]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:39:30 compute-0 sudo[79677]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsfxaifisdcyzkjcsawjzhuyauntnpx ; /usr/bin/python3'
Jan 20 18:39:30 compute-0 sudo[79677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:30 compute-0 systemd[1]: Starting Ceph crash.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:39:30 compute-0 python3[79681]: ansible-ansible.legacy.async_status Invoked with jid=j111329493787.78761 mode=status _async_dir=/root/.ansible_async
Jan 20 18:39:30 compute-0 sudo[79677]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:30 compute-0 sudo[79789]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylwlsnxonihzsxjitkcbsgqkngpvcdah ; /usr/bin/python3'
Jan 20 18:39:30 compute-0 sudo[79789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:30 compute-0 podman[79730]: 2026-01-20 18:39:30.41980684 +0000 UTC m=+0.024185163 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:39:30 compute-0 podman[79730]: 2026-01-20 18:39:30.696445819 +0000 UTC m=+0.300824152 container create 7416fde3489a0650cf209c1b5d06f22debcd1e09731cda145c9c59092ccf5569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:39:30 compute-0 ceph-mon[74381]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710a36d325dcb54878a0b99df794ab97a86b3a2961e244178c9d9453a5385d5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710a36d325dcb54878a0b99df794ab97a86b3a2961e244178c9d9453a5385d5e/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710a36d325dcb54878a0b99df794ab97a86b3a2961e244178c9d9453a5385d5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/710a36d325dcb54878a0b99df794ab97a86b3a2961e244178c9d9453a5385d5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
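The four kernel lines above are informational, not errors: the bind mounts sit on an XFS filesystem formatted without the bigtime feature, so inode timestamps are only representable until 2038. If that matters, a quick check (assuming an xfsprogs recent enough to report the flag):

    # bigtime=0 matches the 2038 limit reported above; bigtime=1 means 64-bit timestamps
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'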
Jan 20 18:39:30 compute-0 python3[79791]: ansible-ansible.legacy.async_status Invoked with jid=j111329493787.78761 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 18:39:30 compute-0 sudo[79789]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:30 compute-0 podman[79730]: 2026-01-20 18:39:30.803927678 +0000 UTC m=+0.408305981 container init 7416fde3489a0650cf209c1b5d06f22debcd1e09731cda145c9c59092ccf5569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:30 compute-0 podman[79730]: 2026-01-20 18:39:30.811206904 +0000 UTC m=+0.415585207 container start 7416fde3489a0650cf209c1b5d06f22debcd1e09731cda145c9c59092ccf5569 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:30 compute-0 bash[79730]: 7416fde3489a0650cf209c1b5d06f22debcd1e09731cda145c9c59092ccf5569
Jan 20 18:39:30 compute-0 systemd[1]: Started Ceph crash.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 20 18:39:30 compute-0 sudo[79377]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: 2026-01-20T18:39:30.947+0000 7fa19a795640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: 2026-01-20T18:39:30.947+0000 7fa19a795640 -1 AuthRegistry(0x7fa1940698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: 2026-01-20T18:39:30.949+0000 7fa19a795640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: 2026-01-20T18:39:30.949+0000 7fa19a795640 -1 AuthRegistry(0x7fa19a793ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: 2026-01-20T18:39:30.950+0000 7fa193fff640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: 2026-01-20T18:39:30.950+0000 7fa19a795640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 20 18:39:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
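The errors above come from ceph-crash's startup probe ("pinging cluster to exercise our key"): it searched only the default admin keyring paths, which are deliberately not mounted into the crash container, then fell back to its normal watch loop. The daemon's own keyring is the one bind-mounted at /etc/ceph/ceph.client.crash.compute-0.keyring (see the xfs remount lines above). One way to confirm that key really was created, sketched with the standard auth CLI from an admin node:

    # The crash client's key and caps the mgr created a few lines earlier
    ceph auth get client.crash.compute-0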
Jan 20 18:39:31 compute-0 sudo[79834]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsaegvupqxohctcxngcrxhufgalrplka ; /usr/bin/python3'
Jan 20 18:39:31 compute-0 sudo[79834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:31 compute-0 ansible-async_wrapper.py[78858]: Done in kid B.
Jan 20 18:39:31 compute-0 python3[79836]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:39:31 compute-0 sudo[79834]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:31 compute-0 sudo[79862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayaohhhuwxspsqozadxanklvpudlqayt ; /usr/bin/python3'
Jan 20 18:39:31 compute-0 sudo[79862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:31 compute-0 python3[79864]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
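The ansible-ansible.legacy.command entry above flattens the playbook's podman invocation onto one line. Reflowed for readability (identical flags, image, and fsid; nothing added):

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch status --format json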
Jan 20 18:39:31 compute-0 podman[79865]: 2026-01-20 18:39:31.758377676 +0000 UTC m=+0.021573353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:31 compute-0 podman[79865]: 2026-01-20 18:39:31.87234449 +0000 UTC m=+0.135540137 container create 95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85 (image=quay.io/ceph/ceph:v19, name=nice_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:39:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:31 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 2aba2ed6-c8b6-4657-a5c4-82065bb2a03b (Updating crash deployment (+1 -> 1))
Jan 20 18:39:31 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 2aba2ed6-c8b6-4657-a5c4-82065bb2a03b (Updating crash deployment (+1 -> 1)) in 4 seconds
Jan 20 18:39:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:39:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 18:39:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 18:39:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:31 compute-0 systemd[1]: Started libpod-conmon-95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85.scope.
Jan 20 18:39:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0fc8b6b2fe1076bbf16cf27478e6e1a12e2c336df22ee74338a16a4203790/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0fc8b6b2fe1076bbf16cf27478e6e1a12e2c336df22ee74338a16a4203790/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19b0fc8b6b2fe1076bbf16cf27478e6e1a12e2c336df22ee74338a16a4203790/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:31 compute-0 sudo[79881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:39:31 compute-0 sudo[79881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:31 compute-0 sudo[79881]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:32 compute-0 sudo[79909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:32 compute-0 sudo[79909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:32 compute-0 sudo[79909]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:32 compute-0 sudo[79934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
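The cephadm ls call above (via the version-pinned script cached under /var/lib/ceph/<fsid>/) inventories every cephadm-managed daemon on this host as a JSON array. Roughly equivalent by hand, assuming the packaged cephadm binary behaves like the cached script:

    # List cephadm-managed daemons on this host as JSON (run as root)
    cephadm ls

    # Illustrative one-liner to pull out daemon names and states
    cephadm ls | python3 -c 'import json,sys; [print(d["name"], d.get("state")) for d in json.load(sys.stdin)]'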
Jan 20 18:39:32 compute-0 sudo[79934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:32 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 1 completed events
Jan 20 18:39:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:39:33 compute-0 podman[79865]: 2026-01-20 18:39:33.104524176 +0000 UTC m=+1.367719913 container init 95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85 (image=quay.io/ceph/ceph:v19, name=nice_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:33 compute-0 podman[79865]: 2026-01-20 18:39:33.111197126 +0000 UTC m=+1.374392813 container start 95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85 (image=quay.io/ceph/ceph:v19, name=nice_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:39:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:33 compute-0 ceph-mon[74381]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:33 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:39:33 compute-0 nice_stonebraker[79885]: 
Jan 20 18:39:33 compute-0 nice_stonebraker[79885]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
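The blank line plus JSON above is the orch status --format json result from the nice_stonebraker container: the cephadm backend is reachable, unpaused, and running 10 worker threads. For scripting against the same output, a small sketch (jq is an assumption here; the playbook itself just parses the module result):

    ceph orch status --format json | jq '{available, backend, paused}'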
Jan 20 18:39:33 compute-0 systemd[1]: libpod-95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85.scope: Deactivated successfully.
Jan 20 18:39:33 compute-0 podman[79865]: 2026-01-20 18:39:33.596125453 +0000 UTC m=+1.859321100 container attach 95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85 (image=quay.io/ceph/ceph:v19, name=nice_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:39:33 compute-0 podman[79865]: 2026-01-20 18:39:33.599102944 +0000 UTC m=+1.862298591 container died 95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85 (image=quay.io/ceph/ceph:v19, name=nice_stonebraker, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:39:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:35 compute-0 ceph-mon[74381]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:35 compute-0 ceph-mon[74381]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-19b0fc8b6b2fe1076bbf16cf27478e6e1a12e2c336df22ee74338a16a4203790-merged.mount: Deactivated successfully.
Jan 20 18:39:35 compute-0 podman[79865]: 2026-01-20 18:39:35.544261806 +0000 UTC m=+3.807457453 container remove 95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85 (image=quay.io/ceph/ceph:v19, name=nice_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:35 compute-0 sudo[79862]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:35 compute-0 systemd[1]: libpod-conmon-95255253d62d5b2e8aca841a20da0d37dc696cc517da1471ac65b9590bc31f85.scope: Deactivated successfully.
Jan 20 18:39:35 compute-0 sudo[80099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbxfrajpdxcjcxnbddpormamemqigbri ; /usr/bin/python3'
Jan 20 18:39:35 compute-0 sudo[80099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:35 compute-0 podman[80063]: 2026-01-20 18:39:35.977129459 +0000 UTC m=+0.202310097 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:39:35 compute-0 python3[80101]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:39:36 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:36 compute-0 ceph-mon[74381]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:36 compute-0 podman[80063]: 2026-01-20 18:39:36.465772916 +0000 UTC m=+0.690953594 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:36 compute-0 podman[80108]: 2026-01-20 18:39:36.462756144 +0000 UTC m=+0.455881394 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:36 compute-0 podman[80108]: 2026-01-20 18:39:36.759914447 +0000 UTC m=+0.753039677 container create a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b (image=quay.io/ceph/ceph:v19, name=naughty_hugle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:39:36 compute-0 systemd[1]: Started libpod-conmon-a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b.scope.
Jan 20 18:39:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d8eae09ec06682a820bddeb1d52ec545fde0295d7f3f272620ebe577908ae1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d8eae09ec06682a820bddeb1d52ec545fde0295d7f3f272620ebe577908ae1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12d8eae09ec06682a820bddeb1d52ec545fde0295d7f3f272620ebe577908ae1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:36 compute-0 podman[80108]: 2026-01-20 18:39:36.960506816 +0000 UTC m=+0.953632046 container init a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b (image=quay.io/ceph/ceph:v19, name=naughty_hugle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Jan 20 18:39:36 compute-0 podman[80108]: 2026-01-20 18:39:36.972535991 +0000 UTC m=+0.965661221 container start a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b (image=quay.io/ceph/ceph:v19, name=naughty_hugle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:39:36 compute-0 podman[80108]: 2026-01-20 18:39:36.978675676 +0000 UTC m=+0.971800926 container attach a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b (image=quay.io/ceph/ceph:v19, name=naughty_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 18:39:37 compute-0 sudo[79934]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 sudo[80191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:39:37 compute-0 sudo[80191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:37 compute-0 sudo[80191]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/170429457' entity='client.admin' 
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Jan 20 18:39:37 compute-0 systemd[1]: libpod-a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b.scope: Deactivated successfully.
Jan 20 18:39:37 compute-0 podman[80108]: 2026-01-20 18:39:37.825143732 +0000 UTC m=+1.818268982 container died a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b (image=quay.io/ceph/ceph:v19, name=naughty_hugle, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:39:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 18:39:37 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 18:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-12d8eae09ec06682a820bddeb1d52ec545fde0295d7f3f272620ebe577908ae1-merged.mount: Deactivated successfully.
Jan 20 18:39:37 compute-0 sudo[80228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:37 compute-0 sudo[80228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:37 compute-0 sudo[80228]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:37 compute-0 sudo[80255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:37 compute-0 sudo[80255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:38 compute-0 ceph-mon[74381]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/170429457' entity='client.admin' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:39:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:39 compute-0 podman[80108]: 2026-01-20 18:39:39.043193229 +0000 UTC m=+3.036318469 container remove a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b (image=quay.io/ceph/ceph:v19, name=naughty_hugle, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 18:39:39 compute-0 sudo[80099]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:39 compute-0 systemd[1]: libpod-conmon-a9816dfad45c7391ea2f3eaf469140e50e1afdbd22675ab0697503e8f7e32b4b.scope: Deactivated successfully.
Jan 20 18:39:39 compute-0 sudo[80332]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xydqxbgrcrbvaxsnqvxlwngtsygdnstq ; /usr/bin/python3'
Jan 20 18:39:39 compute-0 podman[80296]: 2026-01-20 18:39:39.171904409 +0000 UTC m=+0.021045148 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:39 compute-0 sudo[80332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:39 compute-0 podman[80296]: 2026-01-20 18:39:39.290745464 +0000 UTC m=+0.139886183 container create 04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007 (image=quay.io/ceph/ceph:v19, name=lucid_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:39 compute-0 systemd[1]: Started libpod-conmon-04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007.scope.
Jan 20 18:39:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:39 compute-0 python3[80334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
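Paired with the log_to_file run a few seconds earlier, this command enables cluster-log writing to files; in a cephadm deployment both logs typically land under /var/log/ceph/<fsid>/ on the host. A quick verification sketch:

    # Both switches should now report true
    ceph config dump | grep log_to_file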
Jan 20 18:39:39 compute-0 podman[80296]: 2026-01-20 18:39:39.862580884 +0000 UTC m=+0.711721703 container init 04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007 (image=quay.io/ceph/ceph:v19, name=lucid_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:39:39 compute-0 podman[80296]: 2026-01-20 18:39:39.870138897 +0000 UTC m=+0.719279606 container start 04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007 (image=quay.io/ceph/ceph:v19, name=lucid_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:39 compute-0 lucid_thompson[80337]: 167 167
Jan 20 18:39:39 compute-0 systemd[1]: libpod-04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007.scope: Deactivated successfully.
Jan 20 18:39:40 compute-0 ceph-mon[74381]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 18:39:40 compute-0 ceph-mon[74381]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 18:39:40 compute-0 ceph-mon[74381]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:40 compute-0 podman[80296]: 2026-01-20 18:39:40.279209179 +0000 UTC m=+1.128349988 container attach 04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007 (image=quay.io/ceph/ceph:v19, name=lucid_thompson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 18:39:40 compute-0 podman[80296]: 2026-01-20 18:39:40.280175024 +0000 UTC m=+1.129315813 container died 04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007 (image=quay.io/ceph/ceph:v19, name=lucid_thompson, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-558a083d130b6daf70c91f34af40c7b63b68cf01903b4cf56a23eb82104fd8d9-merged.mount: Deactivated successfully.
Jan 20 18:39:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:40 compute-0 podman[80296]: 2026-01-20 18:39:40.795252146 +0000 UTC m=+1.644392905 container remove 04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007 (image=quay.io/ceph/ceph:v19, name=lucid_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:39:40 compute-0 systemd[1]: libpod-conmon-04920da133b99e957f14f9d9725d4b65b9e5645cbba481634a713f07be6a2007.scope: Deactivated successfully.
Jan 20 18:39:40 compute-0 podman[80340]: 2026-01-20 18:39:40.889431654 +0000 UTC m=+1.458175762 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:41 compute-0 podman[80340]: 2026-01-20 18:39:41.378875642 +0000 UTC m=+1.947619720 container create 73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6 (image=quay.io/ceph/ceph:v19, name=unruffled_einstein, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:41 compute-0 sudo[80255]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:41 compute-0 systemd[1]: Started libpod-conmon-73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6.scope.
Jan 20 18:39:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2003f951cd22c31bb8ebdbe40742f559775b6980dfd454ae76ecb63e7313fa5f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2003f951cd22c31bb8ebdbe40742f559775b6980dfd454ae76ecb63e7313fa5f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2003f951cd22c31bb8ebdbe40742f559775b6980dfd454ae76ecb63e7313fa5f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:41 compute-0 podman[80340]: 2026-01-20 18:39:41.546532474 +0000 UTC m=+2.115276602 container init 73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6 (image=quay.io/ceph/ceph:v19, name=unruffled_einstein, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:39:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:41 compute-0 podman[80340]: 2026-01-20 18:39:41.554508338 +0000 UTC m=+2.123252416 container start 73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6 (image=quay.io/ceph/ceph:v19, name=unruffled_einstein, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:39:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:41 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.cepfkm (unknown last config time)...
Jan 20 18:39:41 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.cepfkm (unknown last config time)...
Jan 20 18:39:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 18:39:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:39:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 18:39:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:39:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:41 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.cepfkm on compute-0
Jan 20 18:39:41 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.cepfkm on compute-0
Jan 20 18:39:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 20 18:39:41 compute-0 podman[80340]: 2026-01-20 18:39:41.928615347 +0000 UTC m=+2.497359475 container attach 73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6 (image=quay.io/ceph/ceph:v19, name=unruffled_einstein, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:39:41 compute-0 sudo[80397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:41 compute-0 sudo[80397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:41 compute-0 sudo[80397]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:42 compute-0 sudo[80422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:39:42 compute-0 sudo[80422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:42 compute-0 podman[80463]: 2026-01-20 18:39:42.26321005 +0000 UTC m=+0.025177390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:42 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4187146740' entity='client.admin' 
Jan 20 18:39:42 compute-0 systemd[1]: libpod-73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6.scope: Deactivated successfully.
Jan 20 18:39:43 compute-0 podman[80463]: 2026-01-20 18:39:43.29675573 +0000 UTC m=+1.058723040 container create 270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81 (image=quay.io/ceph/ceph:v19, name=reverent_bardeen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:43 compute-0 ceph-mon[74381]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:43 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:43 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:43 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:39:43 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:39:43 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:43 compute-0 systemd[1]: Started libpod-conmon-270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81.scope.
Jan 20 18:39:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:43 compute-0 podman[80463]: 2026-01-20 18:39:43.606524653 +0000 UTC m=+1.368491993 container init 270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81 (image=quay.io/ceph/ceph:v19, name=reverent_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:43 compute-0 podman[80463]: 2026-01-20 18:39:43.613567613 +0000 UTC m=+1.375534923 container start 270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81 (image=quay.io/ceph/ceph:v19, name=reverent_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:43 compute-0 reverent_bardeen[80491]: 167 167
Jan 20 18:39:43 compute-0 systemd[1]: libpod-270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81.scope: Deactivated successfully.
Jan 20 18:39:43 compute-0 podman[80463]: 2026-01-20 18:39:43.849777653 +0000 UTC m=+1.611744983 container attach 270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81 (image=quay.io/ceph/ceph:v19, name=reverent_bardeen, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 18:39:43 compute-0 podman[80463]: 2026-01-20 18:39:43.850302617 +0000 UTC m=+1.612269957 container died 270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81 (image=quay.io/ceph/ceph:v19, name=reverent_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-410a230d789eeb9a3d20e32bc9166d02f04f41d3ec99eee260c0be10b8b1d535-merged.mount: Deactivated successfully.
Jan 20 18:39:43 compute-0 podman[80463]: 2026-01-20 18:39:43.89791894 +0000 UTC m=+1.659886250 container remove 270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81 (image=quay.io/ceph/ceph:v19, name=reverent_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:43 compute-0 podman[80340]: 2026-01-20 18:39:43.921826715 +0000 UTC m=+4.490570803 container died 73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6 (image=quay.io/ceph/ceph:v19, name=unruffled_einstein, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2003f951cd22c31bb8ebdbe40742f559775b6980dfd454ae76ecb63e7313fa5f-merged.mount: Deactivated successfully.
Jan 20 18:39:43 compute-0 sudo[80422]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:39:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:39:43 compute-0 podman[80340]: 2026-01-20 18:39:43.966558151 +0000 UTC m=+4.535302229 container remove 73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6 (image=quay.io/ceph/ceph:v19, name=unruffled_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:39:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:39:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:39:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:43 compute-0 systemd[1]: libpod-conmon-270dd7cb964f543b0ed6bbcc9d94c1b85108ff8f4ef595b419328b389060ae81.scope: Deactivated successfully.
Jan 20 18:39:43 compute-0 sudo[80332]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:44 compute-0 sudo[80509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:39:44 compute-0 sudo[80509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:44 compute-0 sudo[80509]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:44 compute-0 systemd[1]: libpod-conmon-73d6931f7513a2bf8062e1e0c7600514e92979f1069f039417c1d8f0f5fe46a6.scope: Deactivated successfully.
Jan 20 18:39:44 compute-0 sudo[80557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqaxrdccxcagicvnfdtdabiustvtmxai ; /usr/bin/python3'
Jan 20 18:39:44 compute-0 sudo[80557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:44 compute-0 python3[80559]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:39:44 compute-0 podman[80560]: 2026-01-20 18:39:44.349028205 +0000 UTC m=+0.037167653 container create ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6 (image=quay.io/ceph/ceph:v19, name=objective_ramanujan, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:44 compute-0 systemd[1]: Started libpod-conmon-ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6.scope.
Jan 20 18:39:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec09e59b1202b2d14093effcdb0936d6085e2ae4e92117999eb50a0c4ecab8a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec09e59b1202b2d14093effcdb0936d6085e2ae4e92117999eb50a0c4ecab8a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ec09e59b1202b2d14093effcdb0936d6085e2ae4e92117999eb50a0c4ecab8a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:44 compute-0 podman[80560]: 2026-01-20 18:39:44.412448436 +0000 UTC m=+0.100587914 container init ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6 (image=quay.io/ceph/ceph:v19, name=objective_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:44 compute-0 podman[80560]: 2026-01-20 18:39:44.417248935 +0000 UTC m=+0.105388383 container start ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6 (image=quay.io/ceph/ceph:v19, name=objective_ramanujan, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:39:44 compute-0 podman[80560]: 2026-01-20 18:39:44.42078156 +0000 UTC m=+0.108921008 container attach ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6 (image=quay.io/ceph/ceph:v19, name=objective_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:44 compute-0 podman[80560]: 2026-01-20 18:39:44.333892336 +0000 UTC m=+0.022031804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:44 compute-0 ceph-mon[74381]: Reconfiguring mgr.compute-0.cepfkm (unknown last config time)...
Jan 20 18:39:44 compute-0 ceph-mon[74381]: Reconfiguring daemon mgr.compute-0.cepfkm on compute-0
Jan 20 18:39:44 compute-0 ceph-mon[74381]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4187146740' entity='client.admin' 
Jan 20 18:39:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 20 18:39:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1730507213' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 20 18:39:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 20 18:39:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:39:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1730507213' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Jan 20 18:39:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1730507213' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 18:39:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 20 18:39:45 compute-0 objective_ramanujan[80576]: set require_min_compat_client to mimic
Jan 20 18:39:45 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 20 18:39:45 compute-0 systemd[1]: libpod-ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6.scope: Deactivated successfully.
Jan 20 18:39:45 compute-0 podman[80601]: 2026-01-20 18:39:45.537458472 +0000 UTC m=+0.028170361 container died ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6 (image=quay.io/ceph/ceph:v19, name=objective_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ec09e59b1202b2d14093effcdb0936d6085e2ae4e92117999eb50a0c4ecab8a-merged.mount: Deactivated successfully.
Jan 20 18:39:45 compute-0 podman[80601]: 2026-01-20 18:39:45.572711102 +0000 UTC m=+0.063422961 container remove ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6 (image=quay.io/ceph/ceph:v19, name=objective_ramanujan, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:39:45 compute-0 systemd[1]: libpod-conmon-ae2a47f0c3bc02daa56af28f02444c3bccd8a6bc40a27cf7494343a3eb9f19f6.scope: Deactivated successfully.
Jan 20 18:39:45 compute-0 sudo[80557]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:46 compute-0 sudo[80639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nodwtudpplmgipmesfwwvjoiyjcgzilh ; /usr/bin/python3'
Jan 20 18:39:46 compute-0 sudo[80639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:46 compute-0 python3[80641]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:39:46 compute-0 podman[80642]: 2026-01-20 18:39:46.260701075 +0000 UTC m=+0.037049740 container create 4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09 (image=quay.io/ceph/ceph:v19, name=flamboyant_keller, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 18:39:46 compute-0 systemd[1]: Started libpod-conmon-4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09.scope.
Jan 20 18:39:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b39e6d66ecf259247b48b7b5d84a7e61b7ee09b0449446394f4599f375165/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b39e6d66ecf259247b48b7b5d84a7e61b7ee09b0449446394f4599f375165/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424b39e6d66ecf259247b48b7b5d84a7e61b7ee09b0449446394f4599f375165/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:46 compute-0 podman[80642]: 2026-01-20 18:39:46.330875978 +0000 UTC m=+0.107224663 container init 4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09 (image=quay.io/ceph/ceph:v19, name=flamboyant_keller, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:39:46 compute-0 podman[80642]: 2026-01-20 18:39:46.335878582 +0000 UTC m=+0.112227247 container start 4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09 (image=quay.io/ceph/ceph:v19, name=flamboyant_keller, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:46 compute-0 podman[80642]: 2026-01-20 18:39:46.244512428 +0000 UTC m=+0.020861093 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:46 compute-0 podman[80642]: 2026-01-20 18:39:46.340183428 +0000 UTC m=+0.116532123 container attach 4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09 (image=quay.io/ceph/ceph:v19, name=flamboyant_keller, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:39:46 compute-0 ceph-mon[74381]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1730507213' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 18:39:46 compute-0 ceph-mon[74381]: osdmap e3: 0 total, 0 up, 0 in
Jan 20 18:39:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:46 compute-0 sudo[80682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:39:46 compute-0 sudo[80682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:46 compute-0 sudo[80682]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:46 compute-0 sudo[80707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-0
Jan 20 18:39:46 compute-0 sudo[80707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:47 compute-0 sudo[80707]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:47 compute-0 ceph-mgr[74676]: [cephadm INFO root] Added host compute-0
Jan 20 18:39:47 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:39:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:47 compute-0 sudo[80752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:39:47 compute-0 sudo[80752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:39:47 compute-0 sudo[80752]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:39:48 compute-0 ceph-mon[74381]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:48 compute-0 ceph-mon[74381]: Added host compute-0
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:39:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:48 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Jan 20 18:39:48 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Jan 20 18:39:50 compute-0 ceph-mon[74381]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:50 compute-0 ceph-mon[74381]: Deploying cephadm binary to compute-1
Jan 20 18:39:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:51 compute-0 ceph-mon[74381]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:52 compute-0 ceph-mgr[74676]: [cephadm INFO root] Added host compute-1
Jan 20 18:39:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Added host compute-1
Jan 20 18:39:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:39:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:39:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:53 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:53 compute-0 ceph-mon[74381]: Added host compute-1
Jan 20 18:39:53 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:53 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Jan 20 18:39:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Jan 20 18:39:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:39:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:54 compute-0 ceph-mon[74381]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:54 compute-0 ceph-mon[74381]: Deploying cephadm binary to compute-2
Jan 20 18:39:54 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:56 compute-0 ceph-mon[74381]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 18:39:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: [cephadm INFO root] Added host compute-2
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Added host compute-2
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 18:39:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 18:39:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 20 18:39:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:57 compute-0 flamboyant_keller[80658]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 18:39:57 compute-0 flamboyant_keller[80658]: Added host 'compute-1' with addr '192.168.122.101'
Jan 20 18:39:57 compute-0 flamboyant_keller[80658]: Added host 'compute-2' with addr '192.168.122.102'
Jan 20 18:39:57 compute-0 flamboyant_keller[80658]: Scheduled mon update...
Jan 20 18:39:57 compute-0 flamboyant_keller[80658]: Scheduled mgr update...
Jan 20 18:39:57 compute-0 flamboyant_keller[80658]: Scheduled osd.default_drive_group update...
Jan 20 18:39:57 compute-0 systemd[1]: libpod-4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09.scope: Deactivated successfully.
Jan 20 18:39:57 compute-0 podman[80642]: 2026-01-20 18:39:57.481884115 +0000 UTC m=+11.258232780 container died 4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09 (image=quay.io/ceph/ceph:v19, name=flamboyant_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 18:39:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-424b39e6d66ecf259247b48b7b5d84a7e61b7ee09b0449446394f4599f375165-merged.mount: Deactivated successfully.
Jan 20 18:39:57 compute-0 podman[80642]: 2026-01-20 18:39:57.517677389 +0000 UTC m=+11.294026054 container remove 4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09 (image=quay.io/ceph/ceph:v19, name=flamboyant_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:39:57 compute-0 systemd[1]: libpod-conmon-4ba2dba8e04e66a0f89d916992ec1b47b77cce489d5020353f6dfbb5247f2c09.scope: Deactivated successfully.
Jan 20 18:39:57 compute-0 sudo[80639]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:57 compute-0 sudo[80813]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkgmsjyswophatdymclfqhfkntlvmsuk ; /usr/bin/python3'
Jan 20 18:39:57 compute-0 sudo[80813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:39:57 compute-0 python3[80815]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:39:57 compute-0 podman[80817]: 2026-01-20 18:39:57.980128411 +0000 UTC m=+0.041853330 container create d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078 (image=quay.io/ceph/ceph:v19, name=xenodochial_curie, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:39:58 compute-0 systemd[1]: Started libpod-conmon-d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078.scope.
Jan 20 18:39:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a66d3411da93d07114a8664e6b913f74ce3feff0c3a50f33eef6bd8838a964/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a66d3411da93d07114a8664e6b913f74ce3feff0c3a50f33eef6bd8838a964/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a66d3411da93d07114a8664e6b913f74ce3feff0c3a50f33eef6bd8838a964/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:39:58 compute-0 podman[80817]: 2026-01-20 18:39:58.051718881 +0000 UTC m=+0.113443830 container init d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078 (image=quay.io/ceph/ceph:v19, name=xenodochial_curie, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:58 compute-0 podman[80817]: 2026-01-20 18:39:57.96230872 +0000 UTC m=+0.024033669 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:39:58 compute-0 podman[80817]: 2026-01-20 18:39:58.058272578 +0000 UTC m=+0.119997497 container start d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078 (image=quay.io/ceph/ceph:v19, name=xenodochial_curie, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:39:58 compute-0 podman[80817]: 2026-01-20 18:39:58.061028613 +0000 UTC m=+0.122753522 container attach d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078 (image=quay.io/ceph/ceph:v19, name=xenodochial_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:39:58 compute-0 ceph-mon[74381]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:58 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:58 compute-0 ceph-mon[74381]: Added host compute-2
Jan 20 18:39:58 compute-0 ceph-mon[74381]: Saving service mon spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:58 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:58 compute-0 ceph-mon[74381]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:58 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:58 compute-0 ceph-mon[74381]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 18:39:58 compute-0 ceph-mon[74381]: Marking host: compute-1 for OSDSpec preview refresh.
Jan 20 18:39:58 compute-0 ceph-mon[74381]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Jan 20 18:39:58 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:39:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 18:39:58 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435627713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:39:58 compute-0 xenodochial_curie[80834]: 
Jan 20 18:39:58 compute-0 xenodochial_curie[80834]: {"fsid":"aecbbf3b-b405-507b-97d7-637a83f5b4b1","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":70,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-20T18:38:46:076055+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-20T18:38:46.080282+0000","services":{}},"progress_events":{}}
Jan 20 18:39:58 compute-0 systemd[1]: libpod-d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078.scope: Deactivated successfully.
Jan 20 18:39:58 compute-0 podman[80859]: 2026-01-20 18:39:58.529510985 +0000 UTC m=+0.021532381 container died d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078 (image=quay.io/ceph/ceph:v19, name=xenodochial_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:39:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-e2a66d3411da93d07114a8664e6b913f74ce3feff0c3a50f33eef6bd8838a964-merged.mount: Deactivated successfully.
Jan 20 18:39:58 compute-0 podman[80859]: 2026-01-20 18:39:58.56048367 +0000 UTC m=+0.052505046 container remove d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078 (image=quay.io/ceph/ceph:v19, name=xenodochial_curie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:39:58 compute-0 systemd[1]: libpod-conmon-d48cd0b8b3c00e777b01f2aa2a8c41e52465a7e9e790c154c6cec88abdb14078.scope: Deactivated successfully.
Jan 20 18:39:58 compute-0 sudo[80813]: pam_unix(sudo:session): session closed for user root
Jan 20 18:39:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:39:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:39:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/435627713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:40:00 compute-0 ceph-mon[74381]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:06 compute-0 ceph-mon[74381]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:40:06
Jan 20 18:40:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:40:06 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:40:06 compute-0 ceph-mgr[74676]: [balancer INFO root] No pools available
Jan 20 18:40:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:07 compute-0 ceph-mon[74381]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:07 compute-0 ceph-mon[74381]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:40:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:40:08 compute-0 ceph-mon[74381]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:10 compute-0 ceph-mon[74381]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:12 compute-0 ceph-mon[74381]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:40:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:40:14 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:40:14 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:40:14 compute-0 ceph-mon[74381]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:14 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:40:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:14 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:40:14 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:40:15 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:40:15 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:40:15 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:40:16 compute-0 ceph-mon[74381]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:16 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:40:16 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:40:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:40:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev ed627150-8d76-45af-b061-225f8418cfa7 (Updating crash deployment (+1 -> 2))
Jan 20 18:40:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:40:16.697+0000 7f646a7e5640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: service_name: mon
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: placement:
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   hosts:
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   - compute-0
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   - compute-1
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   - compute-2
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:40:16.699+0000 7f646a7e5640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: service_name: mgr
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: placement:
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   hosts:
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   - compute-0
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   - compute-1
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   - compute-2
Jan 20 18:40:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 18:40:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:40:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 18:40:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:16 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Jan 20 18:40:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Jan 20 18:40:17 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:40:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:40:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 18:40:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 18:40:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:18 compute-0 ceph-mon[74381]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
                                           service_name: mon
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Jan 20 18:40:18 compute-0 ceph-mon[74381]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:18 compute-0 ceph-mon[74381]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
                                           service_name: mgr
                                           placement:
                                             hosts:
                                             - compute-0
                                             - compute-1
                                             - compute-2
                                           ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Jan 20 18:40:18 compute-0 ceph-mon[74381]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:18 compute-0 ceph-mon[74381]: Deploying daemon crash.compute-1 on compute-1
Jan 20 18:40:18 compute-0 ceph-mon[74381]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Jan 20 18:40:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:19 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev ed627150-8d76-45af-b061-225f8418cfa7 (Updating crash deployment (+1 -> 2))
Jan 20 18:40:19 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event ed627150-8d76-45af-b061-225f8418cfa7 (Updating crash deployment (+1 -> 2)) in 3 seconds
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:19 compute-0 sudo[80874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:19 compute-0 sudo[80874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:19 compute-0 sudo[80874]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:19 compute-0 sudo[80899]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:40:19 compute-0 sudo[80899]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:19 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 2 completed events
Jan 20 18:40:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:40:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:19 compute-0 podman[80964]: 2026-01-20 18:40:19.856334413 +0000 UTC m=+0.041152376 container create 2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclean, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 18:40:19 compute-0 systemd[1]: Started libpod-conmon-2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8.scope.
Jan 20 18:40:19 compute-0 podman[80964]: 2026-01-20 18:40:19.836063103 +0000 UTC m=+0.020881046 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:19 compute-0 podman[80964]: 2026-01-20 18:40:19.95042034 +0000 UTC m=+0.135238313 container init 2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclean, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:40:19 compute-0 podman[80964]: 2026-01-20 18:40:19.956235901 +0000 UTC m=+0.141053824 container start 2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclean, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:19 compute-0 podman[80964]: 2026-01-20 18:40:19.960124185 +0000 UTC m=+0.144942148 container attach 2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclean, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:40:19 compute-0 ecstatic_mclean[80981]: 167 167
Jan 20 18:40:19 compute-0 systemd[1]: libpod-2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8.scope: Deactivated successfully.
Jan 20 18:40:19 compute-0 podman[80964]: 2026-01-20 18:40:19.962476923 +0000 UTC m=+0.147294876 container died 2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclean, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:40:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea94631a50f39973b61ec774d406c3e5d89c63def2a8c65a42f15719e88fe35f-merged.mount: Deactivated successfully.
Jan 20 18:40:20 compute-0 podman[80964]: 2026-01-20 18:40:20.006716863 +0000 UTC m=+0.191534786 container remove 2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_mclean, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:20 compute-0 systemd[1]: libpod-conmon-2e12ef33cee15ae332160a3c404b5d270f6e2505d8542f58999f6609f9dacfc8.scope: Deactivated successfully.
Jan 20 18:40:20 compute-0 podman[81005]: 2026-01-20 18:40:20.17350045 +0000 UTC m=+0.044085608 container create 03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:20 compute-0 systemd[1]: Started libpod-conmon-03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966.scope.
Jan 20 18:40:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e515c23d1dcf1c89da1aa055a6659c5c74351511e27502547b4925801e0d31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e515c23d1dcf1c89da1aa055a6659c5c74351511e27502547b4925801e0d31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e515c23d1dcf1c89da1aa055a6659c5c74351511e27502547b4925801e0d31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e515c23d1dcf1c89da1aa055a6659c5c74351511e27502547b4925801e0d31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0e515c23d1dcf1c89da1aa055a6659c5c74351511e27502547b4925801e0d31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:20 compute-0 podman[81005]: 2026-01-20 18:40:20.155878583 +0000 UTC m=+0.026463771 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:20 compute-0 ceph-mon[74381]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:20 compute-0 podman[81005]: 2026-01-20 18:40:20.265498246 +0000 UTC m=+0.136083414 container init 03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cannon, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:40:20 compute-0 podman[81005]: 2026-01-20 18:40:20.278445211 +0000 UTC m=+0.149030369 container start 03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:40:20 compute-0 podman[81005]: 2026-01-20 18:40:20.281467824 +0000 UTC m=+0.152053052 container attach 03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cannon, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:40:20 compute-0 great_cannon[81021]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:40:20 compute-0 great_cannon[81021]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:20 compute-0 great_cannon[81021]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:20 compute-0 great_cannon[81021]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 5f53c0c6-6046-4836-83f9-ff93da7e674e
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "5f53c0c6-6046-4836-83f9-ff93da7e674e"} v 0)
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3520844977' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5f53c0c6-6046-4836-83f9-ff93da7e674e"}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3520844977' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5f53c0c6-6046-4836-83f9-ff93da7e674e"}]': finished
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:21 compute-0 lvm[81083]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:40:21 compute-0 lvm[81083]: VG ceph_vg0 finished
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 20 18:40:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3520844977' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5f53c0c6-6046-4836-83f9-ff93da7e674e"}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3520844977' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5f53c0c6-6046-4836-83f9-ff93da7e674e"}]': finished
Jan 20 18:40:21 compute-0 ceph-mon[74381]: osdmap e4: 1 total, 0 up, 1 in
Jan 20 18:40:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "b50a4f9a-3833-43f1-8f2f-3854da3bc102"} v 0)
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3423407483' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b50a4f9a-3833-43f1-8f2f-3854da3bc102"}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/3423407483' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b50a4f9a-3833-43f1-8f2f-3854da3bc102"}]': finished
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:21 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:21 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 18:40:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1589986016' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 18:40:21 compute-0 great_cannon[81021]:  stderr: got monmap epoch 1
Jan 20 18:40:21 compute-0 great_cannon[81021]: --> Creating keyring file for osd.0
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 20 18:40:21 compute-0 great_cannon[81021]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 5f53c0c6-6046-4836-83f9-ff93da7e674e --setuser ceph --setgroup ceph
Jan 20 18:40:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 18:40:22 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/472000004' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 18:40:22 compute-0 ceph-mon[74381]: pgmap v32: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3423407483' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "b50a4f9a-3833-43f1-8f2f-3854da3bc102"}]: dispatch
Jan 20 18:40:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3423407483' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "b50a4f9a-3833-43f1-8f2f-3854da3bc102"}]': finished
Jan 20 18:40:22 compute-0 ceph-mon[74381]: osdmap e5: 2 total, 0 up, 2 in
Jan 20 18:40:22 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:22 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1589986016' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 18:40:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/472000004' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 18:40:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 18:40:23 compute-0 ceph-mon[74381]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 18:40:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:24 compute-0 ceph-mon[74381]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:24 compute-0 great_cannon[81021]:  stderr: 2026-01-20T18:40:21.784+0000 7f92a8cd8740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 20 18:40:24 compute-0 great_cannon[81021]:  stderr: 2026-01-20T18:40:22.046+0000 7f92a8cd8740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 20 18:40:24 compute-0 great_cannon[81021]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 20 18:40:24 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 18:40:24 compute-0 great_cannon[81021]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 18:40:25 compute-0 great_cannon[81021]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:25 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:25 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 18:40:25 compute-0 great_cannon[81021]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 18:40:25 compute-0 great_cannon[81021]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 18:40:25 compute-0 great_cannon[81021]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 20 18:40:25 compute-0 systemd[1]: libpod-03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966.scope: Deactivated successfully.
Jan 20 18:40:25 compute-0 systemd[1]: libpod-03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966.scope: Consumed 2.064s CPU time.
Jan 20 18:40:25 compute-0 conmon[81021]: conmon 03978724fdd0acaaa6da <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966.scope/container/memory.events
Jan 20 18:40:25 compute-0 podman[81005]: 2026-01-20 18:40:25.261967012 +0000 UTC m=+5.132552190 container died 03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cannon, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 18:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0e515c23d1dcf1c89da1aa055a6659c5c74351511e27502547b4925801e0d31-merged.mount: Deactivated successfully.
Jan 20 18:40:25 compute-0 podman[81005]: 2026-01-20 18:40:25.319659938 +0000 UTC m=+5.190245106 container remove 03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:40:25 compute-0 systemd[1]: libpod-conmon-03978724fdd0acaaa6da8dd07b7d6199f1de5a2a2b84622d0698960b2dac4966.scope: Deactivated successfully.
Jan 20 18:40:25 compute-0 sudo[80899]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:25 compute-0 sudo[81998]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:25 compute-0 sudo[81998]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:25 compute-0 sudo[81998]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:25 compute-0 sudo[82023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:40:25 compute-0 sudo[82023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.879606891 +0000 UTC m=+0.041451235 container create 1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_murdock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:40:25 compute-0 systemd[1]: Started libpod-conmon-1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd.scope.
Jan 20 18:40:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.944176863 +0000 UTC m=+0.106021227 container init 1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.951073891 +0000 UTC m=+0.112918245 container start 1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.954069833 +0000 UTC m=+0.115914207 container attach 1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:40:25 compute-0 quirky_murdock[82099]: 167 167
Jan 20 18:40:25 compute-0 systemd[1]: libpod-1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd.scope: Deactivated successfully.
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.957934127 +0000 UTC m=+0.119778481 container died 1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.863153072 +0000 UTC m=+0.024997456 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-afa1519314f8e0b681173bda7b839f884ea0692ca51dae44e6aa4759368223fc-merged.mount: Deactivated successfully.
Jan 20 18:40:25 compute-0 podman[82083]: 2026-01-20 18:40:25.997046083 +0000 UTC m=+0.158890437 container remove 1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:40:26 compute-0 systemd[1]: libpod-conmon-1b37bb7f0abef6a12fab17356cb8913c2f5fa4b1efe169e57e17428855d084dd.scope: Deactivated successfully.
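The short-lived quirky_murdock container printed only "167 167". This matches cephadm's uid/gid probe, which (as an assumption here, based on cephadm stat-ing /var/lib/ceph inside the image) discovers which uid:gid the ceph user maps to in the container image. A sketch of the same probe against the digest from the log:

    # Hypothetical re-run of the uid/gid probe; 167 167 is the ceph user in this image
    podman run --rm --entrypoint stat \
        quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
        -c '%u %g' /var/lib/ceph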
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.172272954 +0000 UTC m=+0.051177780 container create 9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:40:26 compute-0 systemd[1]: Started libpod-conmon-9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8.scope.
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.147954056 +0000 UTC m=+0.026858882 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab30ce2cdb915ec8f0a4c9e0584388469726262f602b62879ac9b3494920f5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab30ce2cdb915ec8f0a4c9e0584388469726262f602b62879ac9b3494920f5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab30ce2cdb915ec8f0a4c9e0584388469726262f602b62879ac9b3494920f5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab30ce2cdb915ec8f0a4c9e0584388469726262f602b62879ac9b3494920f5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
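These kernel notices mean the xfs filesystem backing the overlay mounts was created without the bigtime feature, so inode timestamps cap out at 2038 (0x7fffffff). One way to check, assuming the overlay store lives on the root xfs and xfsprogs is installed:

    # bigtime=0 is consistent with the 2038 timestamp limit reported above
    xfs_info / | grep -o 'bigtime=[01]'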
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.264284122 +0000 UTC m=+0.143188918 container init 9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_galois, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.270644415 +0000 UTC m=+0.149549211 container start 9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_galois, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.275519594 +0000 UTC m=+0.154424410 container attach 9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:40:26 compute-0 vigilant_galois[82138]: {
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:     "0": [
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:         {
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "devices": [
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "/dev/loop3"
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             ],
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "lv_name": "ceph_lv0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "lv_size": "21470642176",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "name": "ceph_lv0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "tags": {
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.cluster_name": "ceph",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.crush_device_class": "",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.encrypted": "0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.osd_id": "0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.type": "block",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.vdo": "0",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:                 "ceph.with_tpm": "0"
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             },
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "type": "block",
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:             "vg_name": "ceph_vg0"
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:         }
Jan 20 18:40:26 compute-0 vigilant_galois[82138]:     ]
Jan 20 18:40:26 compute-0 vigilant_galois[82138]: }
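That block is the output of ceph-volume lvm list --format json, keyed by OSD id. Individual fields can be pulled out with jq, mirroring the cephadm wrapper invocation logged at 18:40:25:

    # Extract osd_fsid and the backing device for osd.0 (values as shown in the JSON above)
    cephadm ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json \
        | jq -r '."0"[0] | .tags."ceph.osd_fsid", .devices[0]'
    # -> 5f53c0c6-6046-4836-83f9-ff93da7e674e
    # -> /dev/loop3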
Jan 20 18:40:26 compute-0 systemd[1]: libpod-9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8.scope: Deactivated successfully.
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.550849937 +0000 UTC m=+0.429754743 container died 9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_galois, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ab30ce2cdb915ec8f0a4c9e0584388469726262f602b62879ac9b3494920f5b-merged.mount: Deactivated successfully.
Jan 20 18:40:26 compute-0 ceph-mon[74381]: pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:26 compute-0 podman[82122]: 2026-01-20 18:40:26.597320573 +0000 UTC m=+0.476225359 container remove 9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:40:26 compute-0 systemd[1]: libpod-conmon-9ecdb57ca290fc3dbf49f75fe08aea04b2fe35540a31f73a3cfb8968a63adac8.scope: Deactivated successfully.
Jan 20 18:40:26 compute-0 sudo[82023]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 20 18:40:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 18:40:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:26 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
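Before deploying the daemon, the mgr fetches the OSD's keyring and a minimal ceph.conf to drop into the new daemon's data directory. The same two mon commands, issued from the CLI:

    # CLI equivalents of the mon_commands dispatched above
    ceph auth get osd.0                  # keyring for the new OSD
    ceph config generate-minimal-conf    # minimal ceph.conf for the daemon dir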
Jan 20 18:40:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 20 18:40:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
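Once cephadm.serve logs the deploy, placement can be followed from the orchestrator; a sketch, assuming the cephadm mgr module is enabled (it is listed in the mgrmap further down this log):

    # Watch the OSD daemons cephadm is placing; --refresh forces a new inventory pass
    ceph orch ps --daemon-type osd --refresh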
Jan 20 18:40:26 compute-0 sudo[82157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:26 compute-0 sudo[82157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:26 compute-0 sudo[82157]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:26 compute-0 sudo[82182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:40:26 compute-0 sudo[82182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 20 18:40:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 18:40:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:26 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Jan 20 18:40:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.249500667 +0000 UTC m=+0.042127960 container create 7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_fermat, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 18:40:27 compute-0 systemd[1]: Started libpod-conmon-7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01.scope.
Jan 20 18:40:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.307162534 +0000 UTC m=+0.099789827 container init 7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.312569954 +0000 UTC m=+0.105197257 container start 7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.315869764 +0000 UTC m=+0.108497067 container attach 7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:40:27 compute-0 mystifying_fermat[82263]: 167 167
Jan 20 18:40:27 compute-0 systemd[1]: libpod-7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01.scope: Deactivated successfully.
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.317017462 +0000 UTC m=+0.109644755 container died 7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_fermat, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.233196633 +0000 UTC m=+0.025823936 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-cae1b22b6605fb090b07d04cf3e5ec0ac1c181718d6303687b3713344b605e92-merged.mount: Deactivated successfully.
Jan 20 18:40:27 compute-0 podman[82247]: 2026-01-20 18:40:27.35204913 +0000 UTC m=+0.144676433 container remove 7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_fermat, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:40:27 compute-0 systemd[1]: libpod-conmon-7038a0dd5f22078d6801e0f80fff37ec15ade40a74089213516a4f0989a24b01.scope: Deactivated successfully.
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.582316163 +0000 UTC m=+0.034801314 container create 7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:40:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 18:40:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:27 compute-0 ceph-mon[74381]: Deploying daemon osd.0 on compute-0
Jan 20 18:40:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 18:40:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:27 compute-0 systemd[1]: Started libpod-conmon-7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2.scope.
Jan 20 18:40:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf70ab23f2cf7b4232a23e3d76ebf5ffac21f08666f5db59cf496c537e37b3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf70ab23f2cf7b4232a23e3d76ebf5ffac21f08666f5db59cf496c537e37b3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf70ab23f2cf7b4232a23e3d76ebf5ffac21f08666f5db59cf496c537e37b3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf70ab23f2cf7b4232a23e3d76ebf5ffac21f08666f5db59cf496c537e37b3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cf70ab23f2cf7b4232a23e3d76ebf5ffac21f08666f5db59cf496c537e37b3b/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.568426907 +0000 UTC m=+0.020912068 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.669070933 +0000 UTC m=+0.121556084 container init 7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.678824999 +0000 UTC m=+0.131310140 container start 7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.68259372 +0000 UTC m=+0.135078871 container attach 7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:40:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test[82309]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 18:40:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test[82309]:                             [--no-systemd] [--no-tmpfs]
Jan 20 18:40:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test[82309]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 18:40:27 compute-0 systemd[1]: libpod-7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2.scope: Deactivated successfully.
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.855979647 +0000 UTC m=+0.308464798 container died 7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cf70ab23f2cf7b4232a23e3d76ebf5ffac21f08666f5db59cf496c537e37b3b-merged.mount: Deactivated successfully.
Jan 20 18:40:27 compute-0 podman[82293]: 2026-01-20 18:40:27.900096795 +0000 UTC m=+0.352581986 container remove 7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate-test, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 18:40:27 compute-0 systemd[1]: libpod-conmon-7522bed4adfb1ea31ee913bc6548e9c05ca5908e0e5f332ac160921474a173b2.scope: Deactivated successfully.
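The osd-0-activate-test run above is fed --bad-option on purpose: argparse rejects it and prints the usage string, and cephadm appears to use that text to detect which flags this image's ceph-volume activate supports (here --no-systemd and --no-tmpfs). The same probe pattern in shell, run inside a ceph container:

    # Feature-probe a flag by parsing the usage text argparse emits for an unknown option
    if ceph-volume activate --bad-option 2>&1 | grep -q -- '--no-tmpfs'; then
        echo "this ceph-volume supports --no-tmpfs"
    fi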
Jan 20 18:40:28 compute-0 systemd[1]: Reloading.
Jan 20 18:40:28 compute-0 systemd-sysv-generator[82376]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
Jan 20 18:40:28 compute-0 systemd-rc-local-generator[82372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:40:28 compute-0 systemd[1]: Reloading.
Jan 20 18:40:28 compute-0 systemd-rc-local-generator[82411]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:40:28 compute-0 systemd-sysv-generator[82415]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safe, robust and future-proof.
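systemd-sysv-generator re-emits this warning on every daemon-reload because the legacy network init script still has no native unit. The compatibility unit it synthesizes can be inspected directly:

    # Show the generated unit (it lives under /run/systemd/generator.late)
    systemctl cat network.service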
Jan 20 18:40:28 compute-0 ceph-mon[74381]: pgmap v37: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:28 compute-0 ceph-mon[74381]: Deploying daemon osd.1 on compute-1
Jan 20 18:40:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:28 compute-0 systemd[1]: Starting Ceph osd.0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:40:28 compute-0 sudo[82445]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvrfuwasqkuononubwwjjnwjnrscbgtj ; /usr/bin/python3'
Jan 20 18:40:28 compute-0 sudo[82445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:40:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:28 compute-0 python3[82450]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
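That Ansible _raw_params value is one flattened shell pipeline; unfolded for readability, with the same image, mounts, and jq filter as logged:

    # Count up OSDs via a one-shot ceph container (exactly the command Ansible ran above)
    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        status --format json | jq .osdmap.num_up_osds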
Jan 20 18:40:28 compute-0 podman[82491]: 2026-01-20 18:40:28.902441915 +0000 UTC m=+0.040371898 container create 7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a (image=quay.io/ceph/ceph:v19, name=exciting_hamilton, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:28 compute-0 podman[82503]: 2026-01-20 18:40:28.933784124 +0000 UTC m=+0.054215903 container create 2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:40:28 compute-0 systemd[1]: Started libpod-conmon-7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a.scope.
Jan 20 18:40:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:28 compute-0 podman[82491]: 2026-01-20 18:40:28.884105702 +0000 UTC m=+0.022035715 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:40:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aed3aa701618f89fd97d3d70af60b179bb26b8ae9cb6643eacfeedcf1ffb579/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aed3aa701618f89fd97d3d70af60b179bb26b8ae9cb6643eacfeedcf1ffb579/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aed3aa701618f89fd97d3d70af60b179bb26b8ae9cb6643eacfeedcf1ffb579/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6670a06826b94a2c35328214a4530301adfd1242f8c0af22b95d9d0a63772af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6670a06826b94a2c35328214a4530301adfd1242f8c0af22b95d9d0a63772af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6670a06826b94a2c35328214a4530301adfd1242f8c0af22b95d9d0a63772af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6670a06826b94a2c35328214a4530301adfd1242f8c0af22b95d9d0a63772af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6670a06826b94a2c35328214a4530301adfd1242f8c0af22b95d9d0a63772af/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:29 compute-0 podman[82491]: 2026-01-20 18:40:29.007660182 +0000 UTC m=+0.145590185 container init 7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a (image=quay.io/ceph/ceph:v19, name=exciting_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:29 compute-0 podman[82503]: 2026-01-20 18:40:28.913675657 +0000 UTC m=+0.034107446 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:29 compute-0 podman[82503]: 2026-01-20 18:40:29.016770823 +0000 UTC m=+0.137202652 container init 2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:40:29 compute-0 podman[82491]: 2026-01-20 18:40:29.0203982 +0000 UTC m=+0.158328183 container start 7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a (image=quay.io/ceph/ceph:v19, name=exciting_hamilton, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 18:40:29 compute-0 podman[82503]: 2026-01-20 18:40:29.023531186 +0000 UTC m=+0.143962965 container start 2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:40:29 compute-0 podman[82491]: 2026-01-20 18:40:29.025125635 +0000 UTC m=+0.163055618 container attach 7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a (image=quay.io/ceph/ceph:v19, name=exciting_hamilton, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:40:29 compute-0 podman[82503]: 2026-01-20 18:40:29.030395112 +0000 UTC m=+0.150826891 container attach 2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 bash[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 bash[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 18:40:29 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3181733026' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:40:29 compute-0 exciting_hamilton[82524]: 
Jan 20 18:40:29 compute-0 exciting_hamilton[82524]: {"fsid":"aecbbf3b-b405-507b-97d7-637a83f5b4b1","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":101,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1768934421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-20T18:38:46:076055+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T18:40:08.727656+0000","services":{}},"progress_events":{}}
Jan 20 18:40:29 compute-0 systemd[1]: libpod-7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a.scope: Deactivated successfully.
Jan 20 18:40:29 compute-0 podman[82491]: 2026-01-20 18:40:29.454545988 +0000 UTC m=+0.592475961 container died 7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a (image=quay.io/ceph/ceph:v19, name=exciting_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aed3aa701618f89fd97d3d70af60b179bb26b8ae9cb6643eacfeedcf1ffb579-merged.mount: Deactivated successfully.
Jan 20 18:40:29 compute-0 podman[82491]: 2026-01-20 18:40:29.492159889 +0000 UTC m=+0.630089862 container remove 7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a (image=quay.io/ceph/ceph:v19, name=exciting_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 18:40:29 compute-0 systemd[1]: libpod-conmon-7cedae96fd4450426d0febf16ee2957cb8b2efdb397a4bae7dcad1ca336e319a.scope: Deactivated successfully.
Jan 20 18:40:29 compute-0 sudo[82445]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:29 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3181733026' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:40:29 compute-0 lvm[82643]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:40:29 compute-0 lvm[82643]: VG ceph_vg0 finished
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 bash[82503]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 18:40:29 compute-0 bash[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 bash[82503]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 18:40:29 compute-0 bash[82503]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 18:40:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 18:40:29 compute-0 bash[82503]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 18:40:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:30 compute-0 bash[82503]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:30 compute-0 bash[82503]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 18:40:30 compute-0 bash[82503]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 18:40:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 18:40:30 compute-0 bash[82503]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 18:40:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate[82529]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 18:40:30 compute-0 bash[82503]: --> ceph-volume lvm activate successful for osd ID: 0
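The interleaved unit and bash streams above record a single ceph-volume run: raw activation is tried first and finds no matching OSD, so the wrapper falls back to LVM activation, which primes the OSD directory from the BlueStore device and succeeds for osd ID 0. A minimal Python sketch of that fallback path, using exactly the commands logged above (the log shows ceph-authtool running twice; once is enough for illustration); in practice cephadm drives ceph-volume inside the OSD container rather than running these directly:

    import subprocess

    OSD_DIR = "/var/lib/ceph/osd/ceph-0"
    LV = "/dev/ceph_vg0/ceph_lv0"

    def run(*cmd):
        # Each call mirrors one "Running command: ..." line in the log above.
        subprocess.run(cmd, check=True)

    run("/usr/bin/ceph-authtool", "--gen-print-key")
    run("/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR)
    run("/usr/bin/ceph-bluestore-tool", "--cluster=ceph", "prime-osd-dir",
        "--dev", LV, "--path", OSD_DIR, "--no-mon-config")
    run("/usr/bin/ln", "-snf", LV, OSD_DIR + "/block")
    run("/usr/bin/chown", "-h", "ceph:ceph", OSD_DIR + "/block")
    run("/usr/bin/chown", "-R", "ceph:ceph", "/dev/dm-0")
    run("/usr/bin/chown", "-R", "ceph:ceph", OSD_DIR)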
Jan 20 18:40:30 compute-0 systemd[1]: libpod-2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d.scope: Deactivated successfully.
Jan 20 18:40:30 compute-0 systemd[1]: libpod-2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d.scope: Consumed 1.435s CPU time.
Jan 20 18:40:30 compute-0 podman[82756]: 2026-01-20 18:40:30.377659012 +0000 UTC m=+0.032155259 container died 2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6670a06826b94a2c35328214a4530301adfd1242f8c0af22b95d9d0a63772af-merged.mount: Deactivated successfully.
Jan 20 18:40:30 compute-0 podman[82756]: 2026-01-20 18:40:30.435571383 +0000 UTC m=+0.090067580 container remove 2e3b8b457b95ca47523420a61c7aa1d3cbd80f18c9f3de3edc9f699107ea786d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0-activate, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:40:30 compute-0 ceph-mon[74381]: pgmap v38: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:30 compute-0 podman[82816]: 2026-01-20 18:40:30.648316043 +0000 UTC m=+0.048791503 container create d1930c87ccd84d529047deb7d8d742f1c84ce3a39cbfe3fcb18aa72364283239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 18:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4339d1ba93e6a5bd43e3344fbb0052f5d7469fcfd052ea880be160937f1a065/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4339d1ba93e6a5bd43e3344fbb0052f5d7469fcfd052ea880be160937f1a065/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4339d1ba93e6a5bd43e3344fbb0052f5d7469fcfd052ea880be160937f1a065/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4339d1ba93e6a5bd43e3344fbb0052f5d7469fcfd052ea880be160937f1a065/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4339d1ba93e6a5bd43e3344fbb0052f5d7469fcfd052ea880be160937f1a065/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
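The xfs "timestamps until 2038" lines here (and the similar group further down) are informational: the container overlay mounts sit on an XFS filesystem created without the bigtime feature, so its inode timestamps are 32-bit and stop at 0x7fffffff seconds after the epoch. A quick Python check of what that limit means:

    import datetime

    limit = datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc)
    print(limit)  # 2038-01-19 03:14:07+00:00, the cutoff the kernel is reporting

Nothing needs fixing for this deployment; an unmounted filesystem could later be upgraded via xfs_admin's bigtime feature flag if the host were still in service near that date.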
Jan 20 18:40:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:30 compute-0 podman[82816]: 2026-01-20 18:40:30.715095939 +0000 UTC m=+0.115571399 container init d1930c87ccd84d529047deb7d8d742f1c84ce3a39cbfe3fcb18aa72364283239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 20 18:40:30 compute-0 podman[82816]: 2026-01-20 18:40:30.626202227 +0000 UTC m=+0.026677727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:30 compute-0 podman[82816]: 2026-01-20 18:40:30.722202311 +0000 UTC m=+0.122677761 container start d1930c87ccd84d529047deb7d8d742f1c84ce3a39cbfe3fcb18aa72364283239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid)
Jan 20 18:40:30 compute-0 bash[82816]: d1930c87ccd84d529047deb7d8d742f1c84ce3a39cbfe3fcb18aa72364283239
Jan 20 18:40:30 compute-0 systemd[1]: Started Ceph osd.0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:40:30 compute-0 ceph-osd[82836]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:40:30 compute-0 ceph-osd[82836]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Jan 20 18:40:30 compute-0 ceph-osd[82836]: pidfile_write: ignore empty --pid-file
Jan 20 18:40:30 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:30 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:30 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:30 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:30 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) close
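Two messages in the bdev open sequence above recur throughout this log and are benign here: the F_SET_FILE_RW_HINT fcntl fails with EINVAL because the LVM device-mapper volume does not accept per-file write-lifetime hints, and the st_blksize note just records that the device advertises 512-byte sectors while BlueStore keeps its configured 4 KiB block size. The advertised sector sizes can be read directly with the BLKSSZGET/BLKPBSZGET ioctls; a sketch (run as root, device path taken from this log):

    import fcntl, os, struct

    BLKSSZGET = 0x1268   # logical sector size, from linux/fs.h
    BLKPBSZGET = 0x127b  # physical sector size, from linux/fs.h

    def sector_sizes(dev):
        fd = os.open(dev, os.O_RDONLY)
        try:
            logical = struct.unpack("i", fcntl.ioctl(fd, BLKSSZGET, b"\0" * 4))[0]
            physical = struct.unpack("i", fcntl.ioctl(fd, BLKPBSZGET, b"\0" * 4))[0]
            return logical, physical
        finally:
            os.close(fd)

    print(sector_sizes("/dev/ceph_vg0/ceph_lv0"))  # expect 512s, matching st_blksize above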
Jan 20 18:40:30 compute-0 sudo[82182]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:40:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:40:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:30 compute-0 sudo[82848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:30 compute-0 sudo[82848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:30 compute-0 sudo[82848]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:30 compute-0 sudo[82873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:40:30 compute-0 sudo[82873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
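The sudo pair above is the cephadm mgr module refreshing its device inventory: it stages a content-addressed copy of the cephadm binary under the cluster's /var/lib/ceph directory and has it run ceph-volume's raw lister in a container with an 895-second timeout. The same query can be reproduced from the host; a sketch assuming a cephadm binary on PATH (the JSON field names are as ceph-volume emits them, so treat them as illustrative):

    import json, subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"

    out = subprocess.run(
        ["cephadm", "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "raw", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_uuid, dev in json.loads(out).items():
        print(osd_uuid, dev.get("device"), dev.get("osd_id"))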
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11fc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11fc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11fc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11fc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11fc00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.4674863 +0000 UTC m=+0.057895232 container create 80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_elgamal, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:40:31 compute-0 systemd[1]: Started libpod-conmon-80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43.scope.
Jan 20 18:40:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.450954099 +0000 UTC m=+0.041363051 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.5554822 +0000 UTC m=+0.145891222 container init 80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.568250249 +0000 UTC m=+0.158659181 container start 80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.571768833 +0000 UTC m=+0.162177845 container attach 80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_elgamal, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:40:31 compute-0 systemd[1]: libpod-80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43.scope: Deactivated successfully.
Jan 20 18:40:31 compute-0 wizardly_elgamal[82964]: 167 167
Jan 20 18:40:31 compute-0 conmon[82964]: conmon 80c2347b32ee3f842378 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43.scope/container/memory.events
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.576929458 +0000 UTC m=+0.167338410 container died 80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_elgamal, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7af832c789bfc21277ceb091517b0aed22f589df78388fc13134b8844241a55b-merged.mount: Deactivated successfully.
Jan 20 18:40:31 compute-0 podman[82944]: 2026-01-20 18:40:31.633748854 +0000 UTC m=+0.224157786 container remove 80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:40:31 compute-0 systemd[1]: libpod-conmon-80c2347b32ee3f84237817d14591a7b6d5601df7ad1860acebe6360f4ba22c43.scope: Deactivated successfully.
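The wizardly_elgamal container above lives for a few milliseconds just to print "167 167", the uid and gid the ceph image reserves for its ceph user (compare "set uid:gid to 167:167 (ceph:ceph)" earlier); cephadm launches disposable, randomly named containers like this one (and elated_bohr below) to probe the image before deploying daemons. The conmon memory.events warning is a side effect of the container exiting before conmon opens the cgroup file, harmless for a probe this short. The probe is presumably equivalent to something like:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Assumed probe: stat the ceph-owned directory inside a throwaway container.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print(int(out[0]), int(out[1]))  # 167 167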
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649cc11f800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:31 compute-0 podman[82989]: 2026-01-20 18:40:31.778086668 +0000 UTC m=+0.042385318 container create 97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:40:31 compute-0 ceph-mon[74381]: pgmap v39: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:31 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:31 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:31 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:31 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:31 compute-0 systemd[1]: Started libpod-conmon-97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9.scope.
Jan 20 18:40:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0173bdd9fbb03c1819f01bb3a7d9d34e7958f3364c49d940114c3104a63a827e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0173bdd9fbb03c1819f01bb3a7d9d34e7958f3364c49d940114c3104a63a827e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0173bdd9fbb03c1819f01bb3a7d9d34e7958f3364c49d940114c3104a63a827e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0173bdd9fbb03c1819f01bb3a7d9d34e7958f3364c49d940114c3104a63a827e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:31 compute-0 podman[82989]: 2026-01-20 18:40:31.756673099 +0000 UTC m=+0.020971769 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:31 compute-0 podman[82989]: 2026-01-20 18:40:31.867640155 +0000 UTC m=+0.131938825 container init 97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_bohr, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 18:40:31 compute-0 podman[82989]: 2026-01-20 18:40:31.877035082 +0000 UTC m=+0.141333732 container start 97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:40:31 compute-0 podman[82989]: 2026-01-20 18:40:31.881430519 +0000 UTC m=+0.145729169 container attach 97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_bohr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 18:40:31 compute-0 ceph-osd[82836]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 20 18:40:31 compute-0 ceph-osd[82836]: load: jerasure load: lrc 
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 18:40:31 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:32 compute-0 ceph-osd[82836]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
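The two mClock lines fix the scheduler's cost model for this OSD: 157286400 bytes/second per shard is exactly 150 MiB/s, and dividing it by the logged per-IO cost gives roughly 315 IOPS, which matches the stock HDD capacity default (osd_mclock_max_capacity_iops_hdd), so these look like unmodified mclock_scheduler settings for a rotational device. The arithmetic:

    bandwidth = 157_286_400         # bytes/s per shard, from the log
    cost_per_io = 499_321.90        # bytes per IO, from the log
    print(bandwidth / (1 << 20))    # 150.0 MiB/s
    print(bandwidth / cost_per_io)  # ~315.0 IOPS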
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:32 compute-0 lvm[83097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:40:32 compute-0 lvm[83097]: VG ceph_vg0 finished
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:32 compute-0 lvm[83101]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:40:32 compute-0 lvm[83101]: VG ceph_vg0 finished
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc4c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount shared_bdev_used = 0
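At this point BlueFS has mounted the one shared device (bdev 1, the 20 GiB LV) that backs the embedded RocksDB; every "locked allocations" range is empty and shared_bdev_used is 0 because the OSD was created minutes ago and holds no data. With the OSD stopped, the same device can be inspected offline; a sketch using ceph-bluestore-tool's show-label, which prints the BlueStore superblock as JSON (run wherever the tool is installed, e.g. inside the ceph container):

    import json, subprocess

    out = subprocess.run(
        ["ceph-bluestore-tool", "show-label", "--dev", "/dev/ceph_vg0/ceph_lv0"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev, label in json.loads(out).items():
        print(dev, label.get("osd_uuid"), label.get("size"))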
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: RocksDB version: 7.9.2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Git sha 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DB SUMMARY
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DB Session ID:  65QL8S4MTQUI0P0MRR2G
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: CURRENT file:  CURRENT
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
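The DB SUMMARY above is the entire persistent state of this freshly created store: a 1007-byte MANIFEST, one SST file, and a 5 KiB write-ahead log, all held on BlueFS paths (db, db.slow, db.wal) rather than on a kernel filesystem. For offline inspection those files can be copied out to a plain directory; a sketch (OSD must be stopped first):

    import subprocess

    # Export the BlueFS namespace so the files named in the summary
    # (db/MANIFEST-000032, db/000030.sst, db.wal/000031.log) become ordinary files.
    subprocess.run(
        ["ceph-bluestore-tool", "bluefs-export",
         "--path", "/var/lib/ceph/osd/ceph-0", "--out-dir", "/tmp/ceph-0-bluefs"],
        check=True,
    )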
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.error_if_exists: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.create_if_missing: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                     Options.env: 0x5649ccf95dc0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                Options.info_log: 0x5649ccf997a0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                              Options.statistics: (nil)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.use_fsync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                              Options.db_log_dir: 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.write_buffer_manager: 0x5649cd090a00
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.unordered_write: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.row_cache: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                              Options.wal_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.two_write_queues: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.wal_compression: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.atomic_flush: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_background_jobs: 4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_background_compactions: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_subcompactions: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.max_open_files: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Compression algorithms supported:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kZSTD supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kXpressCompression supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kBZip2Compression supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kLZ4Compression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kZlibCompression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kSnappyCompression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
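A consistency check between this options block and the earlier _set_cache_sizes line: the BinnedLRUCache capacity of 483183820 bytes is one 0.45 share of BlueStore's 1 GiB cache (presumably the kv slice, which feeds the RocksDB block cache):

    cache_size = 1073741824        # from _set_cache_sizes above
    print(int(cache_size * 0.45))  # 483183820, the block_cache capacity above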
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
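[Editor's note: ceph-osd dumps the full RocksDB option set once per column family, so the same key/value lines recur below for [m-0], [m-1], [m-2], [p-0] and [p-1]. A minimal Python sketch for pulling the "Options.<name>: <value>" pairs out of a saved journal excerpt follows; the helper name is hypothetical and not part of any Ceph or RocksDB tooling, and the sample lines are copied from the dump above.]

import re

# Hypothetical helper: extract "Options.<name>: <value>" pairs from
# journal lines like the ceph-osd RocksDB dump above. In this simple
# single-dict version, later column families overwrite earlier ones.
OPT_RE = re.compile(r'Options\.([\w.\[\]]+):\s+(\S.*)$')

def parse_rocksdb_options(lines):
    opts = {}
    for line in lines:
        m = OPT_RE.search(line)
        if m:
            opts[m.group(1)] = m.group(2).rstrip()
    return opts

sample = [
    "Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216",
    "Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4",
]
print(parse_rocksdb_options(sample))
# {'write_buffer_size': '16777216', 'compression': 'LZ4'}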
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
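[Editor's note: with Options.level_compaction_dynamic_level_bytes logged as 0 and all the addtl multipliers at 1, per-level targets follow the static formula base * multiplier^(level-1). A quick check of what the logged values imply, purely as arithmetic on the numbers above:]

# Level targets implied by the dump: max_bytes_for_level_base = 1 GiB,
# max_bytes_for_level_multiplier = 8, num_levels = 7, static sizing.
base = 1073741824          # Options.max_bytes_for_level_base
mult = 8.0                 # Options.max_bytes_for_level_multiplier
num_levels = 7             # Options.num_levels

for level in range(1, num_levels):
    target = base * mult ** (level - 1)
    print(f"L{level}: {target / 2**30:.0f} GiB")
# L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB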
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
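[Editor's note: every column family block above reports the same block_cache pointer (0x5649cc1b5350), i.e. one shared BinnedLRUCache rather than a cache per family, sharded into 2**num_shard_bits pieces to reduce lock contention. The logged capacity of 483183820 bytes is exactly 0.45 x 1 GiB, which would be consistent with a ratio-based carve-up of a 1 GiB OSD cache; that derivation is an inference, not something stated in the log.]

# Shard arithmetic on the BinnedLRUCache parameters logged above.
capacity = 483183820       # bytes; equals 0.45 * 2**30
num_shard_bits = 4

shards = 2 ** num_shard_bits
print(f"{shards} shards of ~{capacity / shards / 2**20:.1f} MiB each")
# 16 shards of ~28.8 MiB each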
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
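[Editor's note: the write-buffer settings repeated for each column family bound the memtable footprint: up to max_write_buffer_number buffers of write_buffer_size each can accumulate before writes stall, and a flush merges at least min_write_buffer_number_to_merge of them into one L0 file. Plugging in the logged values:]

# Memtable arithmetic per column family, from the dump above.
write_buffer_size = 16777216           # 16 MiB
max_write_buffer_number = 64
min_write_buffer_number_to_merge = 6

print(f"worst-case cap: {write_buffer_size * max_write_buffer_number / 2**30:.0f} GiB")
print(f"flush unit: {write_buffer_size * min_write_buffer_number_to_merge / 2**20:.0f} MiB")
# worst-case cap: 1 GiB, flush unit: 96 MiB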
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
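[Editor's note: the table_properties_collectors line that recurs in every family configures RocksDB's CompactOnDeletionCollector. To the best of my understanding of that collector, an SST file is flagged for compaction once any sliding window of 32768 entries contains at least 16384 tombstones; the logged deletion ratio of 0 leaves the whole-file ratio check disabled. As arithmetic on the logged values:]

# Deletion-triggered compaction threshold from the dump above.
window_size = 32768        # Sliding window size
deletion_trigger = 16384   # Deletion trigger

print(f"compact when >= {deletion_trigger / window_size:.0%} of a "
      f"{window_size}-entry window are deletes")
# compact when >= 50% of a 32768-entry window are deletes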
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b49b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b49b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99b80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b49b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 lvm[83110]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:40:32 compute-0 lvm[83110]: VG ceph_vg0 finished
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3cd883b2-c6e5-460a-8b6a-5f4ad1ea2d2b
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432626861, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432627081, "job": 1, "event": "recovery_finished"}
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: freelist init
Jan 20 18:40:32 compute-0 ceph-osd[82836]: freelist _read_cfg
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs umount
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 18:40:32 compute-0 podman[82989]: 2026-01-20 18:40:32.64042175 +0000 UTC m=+0.904720430 container died 97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_bohr, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:40:32 compute-0 systemd[1]: libpod-97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9.scope: Deactivated successfully.
Jan 20 18:40:32 compute-0 systemd[1]: libpod-97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9.scope: Consumed 1.093s CPU time.
Jan 20 18:40:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-0173bdd9fbb03c1819f01bb3a7d9d34e7958f3364c49d940114c3104a63a827e-merged.mount: Deactivated successfully.
Jan 20 18:40:32 compute-0 podman[82989]: 2026-01-20 18:40:32.684109097 +0000 UTC m=+0.948407747 container remove 97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_bohr, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 18:40:32 compute-0 systemd[1]: libpod-conmon-97f97fcb1e2b5b2fa0342d4ab26ce6176b6c8afa0d11d2dcae44a3dae2a154f9.scope: Deactivated successfully.
Jan 20 18:40:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:32 compute-0 sudo[82873]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:40:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:40:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bdev(0x5649ccfc5000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluefs mount shared_bdev_used = 4718592
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: RocksDB version: 7.9.2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Git sha 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Compile date 2025-07-17 03:12:14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DB SUMMARY
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DB Session ID:  65QL8S4MTQUI0P0MRR2H
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: CURRENT file:  CURRENT
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.error_if_exists: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.create_if_missing: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                     Options.env: 0x5649cd1342a0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                Options.info_log: 0x5649ccf99920
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                              Options.statistics: (nil)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.use_fsync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                              Options.db_log_dir: 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.write_buffer_manager: 0x5649cd090a00
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.unordered_write: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.row_cache: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                              Options.wal_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.two_write_queues: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.wal_compression: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.atomic_flush: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_background_jobs: 4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_background_compactions: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_subcompactions: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.max_open_files: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Compression algorithms supported:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kZSTD supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kXpressCompression supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kBZip2Compression supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kLZ4Compression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kZlibCompression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         kSnappyCompression supported: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99680)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b5350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
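The memtable knobs dumped above pin down the worst-case write-buffer footprint for this column family. A back-of-the-envelope check using only the logged values (the worst-case formula write_buffer_size * max_write_buffer_number is standard RocksDB accounting, not something the log states directly):

    # Values copied from the Options dump above.
    write_buffer_size = 16_777_216        # Options.write_buffer_size (16 MiB)
    max_write_buffer_number = 64          # Options.max_write_buffer_number
    min_merge = 6                         # Options.min_write_buffer_number_to_merge

    worst_case = write_buffer_size * max_write_buffer_number
    per_flush = write_buffer_size * min_merge
    print(f"worst-case memtable memory per CF: {worst_case / 2**30:.1f} GiB")
    print(f"data merged per flush (at least):  {per_flush / 2**20:.0f} MiB")

With 64 buffers of 16 MiB and a merge threshold of 6, each flush writes roughly 96 MiB to L0, and a single column family can in the worst case hold 1 GiB of unflushed memtables.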
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b49b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
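Since level_compaction_dynamic_level_bytes is 0 in the dump above, the per-level byte targets follow the classic static formula max_bytes_for_level_base * multiplier**(level - 1) (the _addtl factors are all 1 and drop out). A short sketch reproducing the capacities these options imply:

    base = 1_073_741_824   # Options.max_bytes_for_level_base (1 GiB)
    mult = 8.0             # Options.max_bytes_for_level_multiplier
    num_levels = 7         # Options.num_levels

    # L0 is governed by file count (level0_file_num_compaction_trigger = 8),
    # not bytes; the byte targets start at L1.
    for level in range(1, num_levels):
        print(f"L{level} target: {base * mult ** (level - 1) / 2**30:,.0f} GiB")

That yields 1 GiB at L1 growing 8x per level to 32,768 GiB at L6, so with this geometry the deeper levels are effectively unbounded for an OSD-sized database.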
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b49b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
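Note that block_cache is the same pointer (0x5649cc1b49b0) in every column family's table_factory options here, so the 512 MiB BinnedLRUCache is shared across column families rather than allocated per CF. LRU-style caches in RocksDB split their capacity across 2**num_shard_bits shards; a one-liner to confirm the shard geometry from the logged values:

    capacity = 536_870_912        # block_cache_options capacity (512 MiB)
    num_shard_bits = 4            # block_cache_options num_shard_bits

    shards = 2 ** num_shard_bits
    print(f"{shards} shards of {capacity / shards / 2**20:.0f} MiB each")

Sixteen 32 MiB shards, with cache_index_and_filter_blocks: 1 meaning index and filter blocks compete with data blocks for that same budget.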
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:           Options.merge_operator: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5649ccf99ac0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5649cc1b49b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.compression: LZ4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.num_levels: 7
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 18:40:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.bloom_locality: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                               Options.ttl: 2592000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                       Options.enable_blob_files: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                           Options.min_blob_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
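Two of the compaction limits repeated in these dumps are worth decoding. max_compaction_bytes is exactly 25 * target_file_size_base, which matches RocksDB's default derivation of that option (the provenance is an assumption; the ratio itself is plain arithmetic on the logged values), and the pending-compaction limits translate to a 64 GiB slowdown threshold and a 256 GiB hard stop:

    target_file_size_base = 67_108_864      # 64 MiB
    max_compaction_bytes = 1_677_721_600
    soft_limit = 68_719_476_736             # soft_pending_compaction_bytes_limit
    hard_limit = 274_877_906_944            # hard_pending_compaction_bytes_limit

    assert max_compaction_bytes == 25 * target_file_size_base
    print(f"writes slow at {soft_limit / 2**30:.0f} GiB of pending compaction,")
    print(f"and stop at    {hard_limit / 2**30:.0f} GiB")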
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 3cd883b2-c6e5-460a-8b6a-5f4ad1ea2d2b
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432883279, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432886156, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934432, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cd883b2-c6e5-460a-8b6a-5f4ad1ea2d2b", "db_session_id": "65QL8S4MTQUI0P0MRR2H", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432892051, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934432, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cd883b2-c6e5-460a-8b6a-5f4ad1ea2d2b", "db_session_id": "65QL8S4MTQUI0P0MRR2H", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432896315, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934432, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "3cd883b2-c6e5-460a-8b6a-5f4ad1ea2d2b", "db_session_id": "65QL8S4MTQUI0P0MRR2H", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934432898342, "job": 1, "event": "recovery_finished"}
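The EVENT_LOG_v1 records above are single-line JSON payloads behind a fixed marker, so they can be pulled out of a journal dump mechanically. A minimal sketch, assuming the journal has been exported to a plain file (the name osd.log is a placeholder):

    import json

    def rocksdb_events(path):
        """Yield the JSON payload of every EVENT_LOG_v1 line in a log file."""
        marker = "EVENT_LOG_v1 "
        with open(path) as f:
            for line in f:
                i = line.find(marker)
                if i != -1:
                    yield json.loads(line[i + len(marker):])

    for ev in rocksdb_events("osd.log"):
        if ev.get("event") in ("recovery_started", "recovery_finished"):
            print(ev["time_micros"], ev["event"])

Applied to the lines above, this would bracket the WAL replay of log #31 between time_micros 1768934432883279 and 1768934432898342, i.e. about 15 ms.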
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5649cc1e6000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: DB pointer 0x5649cd140000
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 20 18:40:32 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
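The _open_db line above echoes BlueStore's RocksDB option string (the bluestore_rocksdb_options setting) as comma-separated key=value pairs, which is why the per-column-family Options dumps earlier match it (write_buffer_size=16777216, max_write_buffer_number=64, and so on). Parsing it back into a dict is a one-liner:

    # Option string copied verbatim from the _open_db log line above.
    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    print(parsed["write_buffer_size"])   # "16777216", matching the dumps above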
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 18:40:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 460.80 MB usage: 1.39 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 18:40:32 compute-0 ceph-osd[82836]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 18:40:32 compute-0 ceph-osd[82836]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 18:40:32 compute-0 ceph-osd[82836]: _get_class not permitted to load lua
Jan 20 18:40:32 compute-0 ceph-osd[82836]: _get_class not permitted to load sdk
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 load_pgs
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 load_pgs opened 0 pgs
Jan 20 18:40:32 compute-0 ceph-osd[82836]: osd.0 0 log_to_monitors true
Jan 20 18:40:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0[82832]: 2026-01-20T18:40:32.945+0000 7f520e120740 -1 osd.0 0 log_to_monitors true
Jan 20 18:40:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 20 18:40:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 20 18:40:32 compute-0 sudo[83488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:40:32 compute-0 sudo[83488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:32 compute-0 sudo[83488]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:33 compute-0 sudo[83546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:33 compute-0 sudo[83546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:33 compute-0 sudo[83546]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 20 18:40:33 compute-0 sudo[83571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:40:33 compute-0 sudo[83571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:33 compute-0 podman[83665]: 2026-01-20 18:40:33.722210883 +0000 UTC m=+0.062132315 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: pgmap v40: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Jan 20 18:40:33 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:33 compute-0 podman[83665]: 2026-01-20 18:40:33.844133883 +0000 UTC m=+0.184055285 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:33 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:33 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:33 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 18:40:33 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 sudo[83571]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:40:34 compute-0 sudo[83750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 sudo[83750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:34 compute-0 sudo[83750]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:34 compute-0 sudo[83775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:40:34 compute-0 sudo[83775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:34 compute-0 sudo[83775]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:34 compute-0 sudo[83831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:40:34 compute-0 sudo[83831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:34 compute-0 sudo[83831]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:34 compute-0 sudo[83856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- inventory --format=json-pretty --filter-for-batch
Jan 20 18:40:34 compute-0 sudo[83856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0 done with init, starting boot process
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0 start_boot
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 18:40:34 compute-0 ceph-osd[82836]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 18:40:34 compute-0 ceph-mon[74381]: osdmap e6: 2 total, 0 up, 2 in
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1335086582; not ready for session (expect reconnect)
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:34 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.338719429 +0000 UTC m=+0.086025434 container create 0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Jan 20 18:40:35 compute-0 systemd[1]: Started libpod-conmon-0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71.scope.
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.297782318 +0000 UTC m=+0.045088353 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.439351655 +0000 UTC m=+0.186657680 container init 0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.448984448 +0000 UTC m=+0.196290473 container start 0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bhabha, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:40:35 compute-0 awesome_bhabha[83939]: 167 167
Jan 20 18:40:35 compute-0 systemd[1]: libpod-0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71.scope: Deactivated successfully.
Jan 20 18:40:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.470084059 +0000 UTC m=+0.217390064 container attach 0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.470499038 +0000 UTC m=+0.217805043 container died 0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bhabha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:40:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5bc7f148ba76a82507ba91bb9f9e43429006ece6d823040781f11ff1cb8a901-merged.mount: Deactivated successfully.
Jan 20 18:40:35 compute-0 podman[83923]: 2026-01-20 18:40:35.626020233 +0000 UTC m=+0.373326238 container remove 0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 18:40:35 compute-0 systemd[1]: libpod-conmon-0edc6ce2781a34ba7048d7c74d1c31b78e91d26415f429c06ae5ccd9f65c1f71.scope: Deactivated successfully.
Jan 20 18:40:35 compute-0 podman[83963]: 2026-01-20 18:40:35.826597257 +0000 UTC m=+0.056117469 container create 1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 18:40:35 compute-0 systemd[1]: Started libpod-conmon-1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239.scope.
Jan 20 18:40:35 compute-0 podman[83963]: 2026-01-20 18:40:35.797164585 +0000 UTC m=+0.026684777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:40:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc9fb04f8363db01c26b8435942534cbd4451eb7af9abd51395250c1445005e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc9fb04f8363db01c26b8435942534cbd4451eb7af9abd51395250c1445005e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc9fb04f8363db01c26b8435942534cbd4451eb7af9abd51395250c1445005e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fc9fb04f8363db01c26b8435942534cbd4451eb7af9abd51395250c1445005e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:35 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:35 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:35 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1335086582; not ready for session (expect reconnect)
Jan 20 18:40:35 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:35 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:35 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:35 compute-0 ceph-mon[74381]: pgmap v42: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Jan 20 18:40:35 compute-0 ceph-mon[74381]: osdmap e7: 2 total, 0 up, 2 in
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:35 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:35 compute-0 podman[83963]: 2026-01-20 18:40:35.980622765 +0000 UTC m=+0.210142987 container init 1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:35 compute-0 podman[83963]: 2026-01-20 18:40:35.987540503 +0000 UTC m=+0.217060685 container start 1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:40:36 compute-0 podman[83963]: 2026-01-20 18:40:36.014193938 +0000 UTC m=+0.243714110 container attach 1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_perlman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 18:40:36 compute-0 priceless_perlman[83979]: [
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:     {
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "available": false,
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "being_replaced": false,
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "ceph_device_lvm": false,
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "lsm_data": {},
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "lvs": [],
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "path": "/dev/sr0",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "rejected_reasons": [
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "Insufficient space (<5GB)",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "Has a FileSystem"
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         ],
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         "sys_api": {
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "actuators": null,
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "device_nodes": [
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:                 "sr0"
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             ],
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "devname": "sr0",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "human_readable_size": "482.00 KB",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "id_bus": "ata",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "model": "QEMU DVD-ROM",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "nr_requests": "2",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "parent": "/dev/sr0",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "partitions": {},
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "path": "/dev/sr0",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "removable": "1",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "rev": "2.5+",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "ro": "0",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "rotational": "1",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "sas_address": "",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "sas_device_handle": "",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "scheduler_mode": "mq-deadline",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "sectors": 0,
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "sectorsize": "2048",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "size": 493568.0,
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "support_discard": "2048",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "type": "disk",
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:             "vendor": "QEMU"
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:         }
Jan 20 18:40:36 compute-0 priceless_perlman[83979]:     }
Jan 20 18:40:36 compute-0 priceless_perlman[83979]: ]
Jan 20 18:40:36 compute-0 systemd[1]: libpod-1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239.scope: Deactivated successfully.
Jan 20 18:40:36 compute-0 podman[83963]: 2026-01-20 18:40:36.694983945 +0000 UTC m=+0.924504117 container died 1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fc9fb04f8363db01c26b8435942534cbd4451eb7af9abd51395250c1445005e-merged.mount: Deactivated successfully.
Jan 20 18:40:36 compute-0 podman[83963]: 2026-01-20 18:40:36.832366242 +0000 UTC m=+1.061886414 container remove 1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_perlman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:40:36 compute-0 systemd[1]: libpod-conmon-1b4f5862c43494aff4104a9b9d8ddbea6c79dbea50d0fea74ee0d443c1ae5239.scope: Deactivated successfully.
Jan 20 18:40:36 compute-0 sudo[83856]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:40:36 compute-0 sshd-session[85057]: Connection closed by 43.103.0.45 port 41766
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1335086582; not ready for session (expect reconnect)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 20 18:40:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Jan 20 18:40:36 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Jan 20 18:40:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 20 18:40:37 compute-0 ceph-mon[74381]: purged_snaps scrub starts
Jan 20 18:40:37 compute-0 ceph-mon[74381]: purged_snaps scrub ok
Jan 20 18:40:37 compute-0 ceph-mon[74381]: purged_snaps scrub starts
Jan 20 18:40:37 compute-0 ceph-mon[74381]: purged_snaps scrub ok
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:37 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1335086582; not ready for session (expect reconnect)
Jan 20 18:40:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:37 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:38 compute-0 ceph-mon[74381]: pgmap v44: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:38 compute-0 ceph-mon[74381]: Adjusting osd_memory_target on compute-0 to 127.9M
Jan 20 18:40:38 compute-0 ceph-mon[74381]: Unable to set osd_memory_target on compute-0 to 134214860: error parsing value: Value '134214860' is below minimum 939524096
Jan 20 18:40:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 24.669 iops: 6315.293 elapsed_sec: 0.475
Jan 20 18:40:38 compute-0 ceph-osd[82836]: log_channel(cluster) log [WRN] : OSD bench result of 6315.292799 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 0 waiting for initial osdmap
Jan 20 18:40:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0[82832]: 2026-01-20T18:40:38.815+0000 7f520a0a3640 -1 osd.0 0 waiting for initial osdmap
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 18:40:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-osd-0[82832]: 2026-01-20T18:40:38.844+0000 7f52056cb640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 set_numa_affinity not setting numa affinity
Jan 20 18:40:38 compute-0 ceph-osd[82836]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/1335086582; not ready for session (expect reconnect)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:38 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mon[74381]: Adjusting osd_memory_target on compute-1 to  5247M
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:39 compute-0 ceph-mon[74381]: OSD bench result of 6315.292799 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e8 e8: 2 total, 1 up, 2 in
Jan 20 18:40:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582] boot
Jan 20 18:40:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 1 up, 2 in
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:40:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:39 compute-0 ceph-osd[82836]: osd.0 8 state: booting -> active
Jan 20 18:40:39 compute-0 ceph-mgr[74676]: [devicehealth INFO root] creating mgr pool
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 20 18:40:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:39 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:40:40 compute-0 ceph-mon[74381]: pgmap v45: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 18:40:40 compute-0 ceph-mon[74381]: osd.0 [v2:192.168.122.100:6802/1335086582,v1:192.168.122.100:6803/1335086582] boot
Jan 20 18:40:40 compute-0 ceph-mon[74381]: osdmap e8: 2 total, 1 up, 2 in
Jan 20 18:40:40 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e9 e9: 2 total, 1 up, 2 in
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 18:40:40 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 1 up, 2 in
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 20 18:40:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 18:40:40 compute-0 ceph-osd[82836]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 18:40:40 compute-0 ceph-osd[82836]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 20 18:40:40 compute-0 ceph-osd[82836]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 18:40:40 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:40 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 20 18:40:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:40:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 18:40:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e10 e10: 2 total, 1 up, 2 in
Jan 20 18:40:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 1 up, 2 in
Jan 20 18:40:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:41 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 18:40:41 compute-0 ceph-mon[74381]: osdmap e9: 2 total, 1 up, 2 in
Jan 20 18:40:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Jan 20 18:40:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:41 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/3306145470; not ready for session (expect reconnect)
Jan 20 18:40:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:41 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 18:40:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 20 18:40:42 compute-0 ceph-mon[74381]: pgmap v48: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 18:40:42 compute-0 ceph-mon[74381]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:40:42 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 18:40:42 compute-0 ceph-mon[74381]: osdmap e10: 2 total, 1 up, 2 in
Jan 20 18:40:42 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:42 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 18:40:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Jan 20 18:40:42 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470] boot
Jan 20 18:40:42 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Jan 20 18:40:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:40:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:43 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:40:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 20 18:40:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Jan 20 18:40:43 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Jan 20 18:40:43 compute-0 ceph-mon[74381]: OSD bench result of 2585.777704 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 18:40:43 compute-0 ceph-mon[74381]: pgmap v50: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 18:40:43 compute-0 ceph-mon[74381]: osd.1 [v2:192.168.122.101:6800/3306145470,v1:192.168.122.101:6801/3306145470] boot
Jan 20 18:40:43 compute-0 ceph-mon[74381]: osdmap e11: 2 total, 2 up, 2 in
Jan 20 18:40:43 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:40:43 compute-0 ceph-mon[74381]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:40:43 compute-0 ceph-mgr[74676]: [devicehealth INFO root] creating main.db for devicehealth
Jan 20 18:40:43 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 18:40:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 18:40:43 compute-0 sudo[85085]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 20 18:40:43 compute-0 sudo[85085]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 20 18:40:43 compute-0 sudo[85085]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 20 18:40:43 compute-0 sudo[85085]: pam_unix(sudo:session): session closed for user root
Jan 20 18:40:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 18:40:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:40:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:40:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 20 18:40:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Jan 20 18:40:44 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Jan 20 18:40:44 compute-0 ceph-mon[74381]: osdmap e12: 2 total, 2 up, 2 in
Jan 20 18:40:44 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 18:40:44 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 18:40:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:40:44 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.cepfkm(active, since 98s)
Jan 20 18:40:45 compute-0 ceph-mon[74381]: pgmap v53: 1 pgs: 1 creating+peering; 0 B data, 852 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:45 compute-0 ceph-mon[74381]: osdmap e13: 2 total, 2 up, 2 in
Jan 20 18:40:45 compute-0 ceph-mon[74381]: mgrmap e9: compute-0.cepfkm(active, since 98s)
Jan 20 18:40:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:48 compute-0 ceph-mon[74381]: pgmap v55: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:50 compute-0 ceph-mon[74381]: pgmap v56: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:52 compute-0 ceph-mon[74381]: pgmap v57: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:53 compute-0 ceph-mon[74381]: pgmap v58: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:55 compute-0 ceph-mon[74381]: pgmap v59: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:57 compute-0 ceph-mon[74381]: pgmap v60: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e13 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:40:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:40:59 compute-0 sudo[85111]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdvinxzqcvwcajqshovrtivhovxiozmk ; /usr/bin/python3'
Jan 20 18:40:59 compute-0 sudo[85111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:40:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:40:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:40:59 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:40:59 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:40:59 compute-0 python3[85113]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:40:59 compute-0 podman[85115]: 2026-01-20 18:40:59.821700674 +0000 UTC m=+0.042297615 container create b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8 (image=quay.io/ceph/ceph:v19, name=magical_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:40:59 compute-0 systemd[1]: Started libpod-conmon-b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8.scope.
Jan 20 18:40:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d316b4c6947aa614d010ea2ce4f956dfcae03a9fe04de6900d534b3a11c36/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d316b4c6947aa614d010ea2ce4f956dfcae03a9fe04de6900d534b3a11c36/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/438d316b4c6947aa614d010ea2ce4f956dfcae03a9fe04de6900d534b3a11c36/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:40:59 compute-0 podman[85115]: 2026-01-20 18:40:59.806170498 +0000 UTC m=+0.026767469 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:40:59 compute-0 podman[85115]: 2026-01-20 18:40:59.902743476 +0000 UTC m=+0.123340437 container init b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8 (image=quay.io/ceph/ceph:v19, name=magical_faraday, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:40:59 compute-0 podman[85115]: 2026-01-20 18:40:59.910320569 +0000 UTC m=+0.130917510 container start b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8 (image=quay.io/ceph/ceph:v19, name=magical_faraday, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 18:40:59 compute-0 podman[85115]: 2026-01-20 18:40:59.913499985 +0000 UTC m=+0.134096956 container attach b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8 (image=quay.io/ceph/ceph:v19, name=magical_faraday, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 18:41:00 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4184939469' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:41:00 compute-0 magical_faraday[85131]: 
Jan 20 18:41:00 compute-0 magical_faraday[85131]: {"fsid":"aecbbf3b-b405-507b-97d7-637a83f5b4b1","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":132,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":13,"num_osds":2,"num_up_osds":2,"osd_up_since":1768934442,"num_in_osds":2,"osd_in_since":1768934421,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894664704,"bytes_avail":42046619648,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2026-01-20T18:38:46:076055+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T18:40:08.727656+0000","services":{}},"progress_events":{}}
Jan 20 18:41:00 compute-0 systemd[1]: libpod-b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8.scope: Deactivated successfully.
Jan 20 18:41:00 compute-0 podman[85115]: 2026-01-20 18:41:00.372429603 +0000 UTC m=+0.593026544 container died b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8 (image=quay.io/ceph/ceph:v19, name=magical_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:41:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-438d316b4c6947aa614d010ea2ce4f956dfcae03a9fe04de6900d534b3a11c36-merged.mount: Deactivated successfully.
Jan 20 18:41:00 compute-0 podman[85115]: 2026-01-20 18:41:00.430343116 +0000 UTC m=+0.650940057 container remove b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8 (image=quay.io/ceph/ceph:v19, name=magical_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:00 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:00 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:00 compute-0 systemd[1]: libpod-conmon-b81cad28191b47e94a82d595da7c67b080c70198d9ac394a55c4534769a127c8.scope: Deactivated successfully.
Jan 20 18:41:00 compute-0 sudo[85111]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:00 compute-0 ceph-mon[74381]: pgmap v61: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:41:00 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:41:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4184939469' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:41:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:00 compute-0 sudo[85193]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obakjspjfdgiefghbmsbrpydrltsgrvp ; /usr/bin/python3'
Jan 20 18:41:00 compute-0 sudo[85193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:00 compute-0 python3[85195]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:00 compute-0 podman[85196]: 2026-01-20 18:41:00.899602964 +0000 UTC m=+0.037959030 container create 994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad (image=quay.io/ceph/ceph:v19, name=exciting_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:41:00 compute-0 systemd[1]: Started libpod-conmon-994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad.scope.
Jan 20 18:41:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487a443d18db16e8d239a8a5ca1d6cbf6d4630b53805cd37f50df4b92f0a302f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487a443d18db16e8d239a8a5ca1d6cbf6d4630b53805cd37f50df4b92f0a302f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:00 compute-0 podman[85196]: 2026-01-20 18:41:00.971425522 +0000 UTC m=+0.109781648 container init 994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad (image=quay.io/ceph/ceph:v19, name=exciting_taussig, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:00 compute-0 podman[85196]: 2026-01-20 18:41:00.883169666 +0000 UTC m=+0.021525752 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:00 compute-0 podman[85196]: 2026-01-20 18:41:00.979047146 +0000 UTC m=+0.117403212 container start 994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad (image=quay.io/ceph/ceph:v19, name=exciting_taussig, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:41:00 compute-0 podman[85196]: 2026-01-20 18:41:00.983003912 +0000 UTC m=+0.121359988 container attach 994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad (image=quay.io/ceph/ceph:v19, name=exciting_taussig, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:01 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:41:01 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:41:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3293934440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 20 18:41:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3293934440' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Jan 20 18:41:01 compute-0 exciting_taussig[85211]: pool 'vms' created
Jan 20 18:41:01 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3293934440' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:01 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Jan 20 18:41:01 compute-0 systemd[1]: libpod-994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad.scope: Deactivated successfully.
Jan 20 18:41:01 compute-0 podman[85196]: 2026-01-20 18:41:01.727138573 +0000 UTC m=+0.865494639 container died 994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad (image=quay.io/ceph/ceph:v19, name=exciting_taussig, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-487a443d18db16e8d239a8a5ca1d6cbf6d4630b53805cd37f50df4b92f0a302f-merged.mount: Deactivated successfully.
Jan 20 18:41:01 compute-0 podman[85196]: 2026-01-20 18:41:01.761431003 +0000 UTC m=+0.899787069 container remove 994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad (image=quay.io/ceph/ceph:v19, name=exciting_taussig, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 18:41:01 compute-0 systemd[1]: libpod-conmon-994d5805d5f621c38bdf0a6fa6d2e144993e2a90d940258622b8f189210834ad.scope: Deactivated successfully.
Jan 20 18:41:01 compute-0 sudo[85193]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:01 compute-0 sudo[85273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxojcheiqjlxqdnafoeomiwdvgavpcnm ; /usr/bin/python3'
Jan 20 18:41:01 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:41:01 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:41:01 compute-0 sudo[85273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:02 compute-0 python3[85275]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.096701788 +0000 UTC m=+0.043059563 container create 2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413 (image=quay.io/ceph/ceph:v19, name=thirsty_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:41:02 compute-0 systemd[1]: Started libpod-conmon-2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413.scope.
Jan 20 18:41:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee8e55ab550b3bdbad2d4afe0c74ee7b3221d15b3bdfbb60503c90fe218ddca/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ee8e55ab550b3bdbad2d4afe0c74ee7b3221d15b3bdfbb60503c90fe218ddca/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.076529669 +0000 UTC m=+0.022887474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.173822355 +0000 UTC m=+0.120180150 container init 2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413 (image=quay.io/ceph/ceph:v19, name=thirsty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.183001116 +0000 UTC m=+0.129358891 container start 2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413 (image=quay.io/ceph/ceph:v19, name=thirsty_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.18726468 +0000 UTC m=+0.133622455 container attach 2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413 (image=quay.io/ceph/ceph:v19, name=thirsty_wiles, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3390807309' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:02 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev c7680c61-05cf-405e-8acd-77ad70cd9c77 (Updating mon deployment (+2 -> 3))
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Jan 20 18:41:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3390807309' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Jan 20 18:41:02 compute-0 thirsty_wiles[85291]: pool 'volumes' created
Jan 20 18:41:02 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Jan 20 18:41:02 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 15 pg[3.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:02 compute-0 ceph-mon[74381]: pgmap v62: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:02 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3293934440' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:02 compute-0 ceph-mon[74381]: osdmap e14: 2 total, 2 up, 2 in
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3390807309' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:41:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:02 compute-0 systemd[1]: libpod-2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413.scope: Deactivated successfully.
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.715554027 +0000 UTC m=+0.661911802 container died 2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413 (image=quay.io/ceph/ceph:v19, name=thirsty_wiles, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:41:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ee8e55ab550b3bdbad2d4afe0c74ee7b3221d15b3bdfbb60503c90fe218ddca-merged.mount: Deactivated successfully.
Jan 20 18:41:02 compute-0 podman[85276]: 2026-01-20 18:41:02.748927345 +0000 UTC m=+0.695285120 container remove 2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413 (image=quay.io/ceph/ceph:v19, name=thirsty_wiles, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:41:02 compute-0 systemd[1]: libpod-conmon-2ddd82cae186e7815561b10f87c425f716b363a2eaae4a6bc1ef75067c483413.scope: Deactivated successfully.
Jan 20 18:41:02 compute-0 sudo[85273]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:02 compute-0 sudo[85352]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhgxmfufhbysfgnnjdptkrvebeojsbnu ; /usr/bin/python3'
Jan 20 18:41:02 compute-0 sudo[85352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:03 compute-0 python3[85354]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:03 compute-0 podman[85355]: 2026-01-20 18:41:03.049856098 +0000 UTC m=+0.036307929 container create 0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd (image=quay.io/ceph/ceph:v19, name=stupefied_poincare, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:03 compute-0 systemd[1]: Started libpod-conmon-0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd.scope.
Jan 20 18:41:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6538d41ca0cb7d82e7a90cf0d8a52c32a57b1ab32599aa1f7c81e0a09a984a35/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6538d41ca0cb7d82e7a90cf0d8a52c32a57b1ab32599aa1f7c81e0a09a984a35/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:03 compute-0 podman[85355]: 2026-01-20 18:41:03.111688495 +0000 UTC m=+0.098140336 container init 0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd (image=quay.io/ceph/ceph:v19, name=stupefied_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:03 compute-0 podman[85355]: 2026-01-20 18:41:03.117774512 +0000 UTC m=+0.104226353 container start 0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd (image=quay.io/ceph/ceph:v19, name=stupefied_poincare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:03 compute-0 podman[85355]: 2026-01-20 18:41:03.121368759 +0000 UTC m=+0.107820620 container attach 0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd (image=quay.io/ceph/ceph:v19, name=stupefied_poincare, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:03 compute-0 podman[85355]: 2026-01-20 18:41:03.034153588 +0000 UTC m=+0.020605449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2064839815' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:03 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:03 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 20 18:41:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 20 18:41:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2064839815' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e16 e16: 2 total, 2 up, 2 in
Jan 20 18:41:03 compute-0 stupefied_poincare[85371]: pool 'backups' created
Jan 20 18:41:03 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e16: 2 total, 2 up, 2 in
Jan 20 18:41:03 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:41:03 compute-0 ceph-mon[74381]: pgmap v64: 2 pgs: 1 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:03 compute-0 ceph-mon[74381]: Deploying daemon mon.compute-2 on compute-2
Jan 20 18:41:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3390807309' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:03 compute-0 ceph-mon[74381]: osdmap e15: 2 total, 2 up, 2 in
Jan 20 18:41:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2064839815' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:03 compute-0 ceph-mon[74381]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:03 compute-0 ceph-mon[74381]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Jan 20 18:41:03 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 16 pg[4.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:03 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 16 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:03 compute-0 systemd[1]: libpod-0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd.scope: Deactivated successfully.
Jan 20 18:41:03 compute-0 conmon[85371]: conmon 0644b4994d01aa296c3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd.scope/container/memory.events
Jan 20 18:41:03 compute-0 podman[85355]: 2026-01-20 18:41:03.72871859 +0000 UTC m=+0.715170421 container died 0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd (image=quay.io/ceph/ceph:v19, name=stupefied_poincare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 18:41:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6538d41ca0cb7d82e7a90cf0d8a52c32a57b1ab32599aa1f7c81e0a09a984a35-merged.mount: Deactivated successfully.
Jan 20 18:41:04 compute-0 podman[85355]: 2026-01-20 18:41:04.025680477 +0000 UTC m=+1.012132308 container remove 0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd (image=quay.io/ceph/ceph:v19, name=stupefied_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:41:04 compute-0 sudo[85352]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:04 compute-0 systemd[1]: libpod-conmon-0644b4994d01aa296c3e2d1f4a3d82d50f635467a1cc4ec6630984a8134be2fd.scope: Deactivated successfully.
Jan 20 18:41:04 compute-0 sudo[85433]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdljoakraipokjsgrqahmuwyxwjhckvr ; /usr/bin/python3'
Jan 20 18:41:04 compute-0 sudo[85433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:04 compute-0 python3[85435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:04 compute-0 podman[85436]: 2026-01-20 18:41:04.344024842 +0000 UTC m=+0.039460266 container create f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99 (image=quay.io/ceph/ceph:v19, name=thirsty_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 18:41:04 compute-0 systemd[1]: Started libpod-conmon-f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99.scope.
Jan 20 18:41:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2a2c4831c540526f615f2b09c045c0550bbd7026d90740a6657c54c779b3a4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b2a2c4831c540526f615f2b09c045c0550bbd7026d90740a6657c54c779b3a4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:04 compute-0 podman[85436]: 2026-01-20 18:41:04.409042916 +0000 UTC m=+0.104478340 container init f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99 (image=quay.io/ceph/ceph:v19, name=thirsty_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:04 compute-0 podman[85436]: 2026-01-20 18:41:04.41337755 +0000 UTC m=+0.108812974 container start f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99 (image=quay.io/ceph/ceph:v19, name=thirsty_neumann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:04 compute-0 podman[85436]: 2026-01-20 18:41:04.416678771 +0000 UTC m=+0.112114195 container attach f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99 (image=quay.io/ceph/ceph:v19, name=thirsty_neumann, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 18:41:04 compute-0 podman[85436]: 2026-01-20 18:41:04.327268417 +0000 UTC m=+0.022703861 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:04 compute-0 ceph-mgr[74676]: [progress WARNING root] Starting Global Recovery Event,2 pgs not in active + clean state
Jan 20 18:41:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v67: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 20 18:41:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e17 e17: 2 total, 2 up, 2 in
Jan 20 18:41:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2064839815' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:04 compute-0 ceph-mon[74381]: osdmap e16: 2 total, 2 up, 2 in
Jan 20 18:41:04 compute-0 ceph-mon[74381]: pgmap v67: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Jan 20 18:41:04 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e17: 2 total, 2 up, 2 in
Jan 20 18:41:04 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 17 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=16) [0] r=0 lpr=16 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1296988523' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1296988523' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e18 e18: 2 total, 2 up, 2 in
Jan 20 18:41:05 compute-0 thirsty_neumann[85452]: pool 'images' created
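Each of the OpenStack service pools (vms, volumes, backups, images) is created by the same mon command recorded in the audit lines above. Reassembled as a CLI invocation, with arguments taken verbatim from the log (note that the positional replicated_rule argument ends up in the erasure_code_profile field of the parsed mon command):

    ceph osd pool create images replicated_rule --autoscale-mode on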
Jan 20 18:41:05 compute-0 ceph-mon[74381]: osdmap e17: 2 total, 2 up, 2 in
Jan 20 18:41:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1296988523' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 18 pg[5.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:05 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Jan 20 18:41:05 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Jan 20 18:41:05 compute-0 systemd[1]: libpod-f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99.scope: Deactivated successfully.
Jan 20 18:41:05 compute-0 conmon[85452]: conmon f411c2958e78cfec5b6b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99.scope/container/memory.events
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Jan 20 18:41:05 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:05 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 20 18:41:05 compute-0 ceph-mon[74381]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Jan 20 18:41:05 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 18:41:05 compute-0 podman[85479]: 2026-01-20 18:41:05.801262604 +0000 UTC m=+0.023794297 container died f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99 (image=quay.io/ceph/ceph:v19, name=thirsty_neumann, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 18:41:05 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b2a2c4831c540526f615f2b09c045c0550bbd7026d90740a6657c54c779b3a4-merged.mount: Deactivated successfully.
Jan 20 18:41:05 compute-0 podman[85479]: 2026-01-20 18:41:05.833906564 +0000 UTC m=+0.056438207 container remove f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99 (image=quay.io/ceph/ceph:v19, name=thirsty_neumann, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:05 compute-0 systemd[1]: libpod-conmon-f411c2958e78cfec5b6bf5b1d91848d233e1612a6ad63e3a6bec1ace0f8a1b99.scope: Deactivated successfully.
Jan 20 18:41:05 compute-0 sudo[85433]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:05 compute-0 sudo[85517]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezccbbvofoictadxjhxzmpksilsumlc ; /usr/bin/python3'
Jan 20 18:41:05 compute-0 sudo[85517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:06 compute-0 python3[85519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
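The Ansible-invoked command above, reflowed for readability; every path, the FSID, the image tag, and the pool arguments are verbatim from the log line, and the same containerized-client pattern recurs for each pool in this run:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create cephfs.cephfs.meta replicated_rule --autoscale-mode on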
Jan 20 18:41:06 compute-0 podman[85520]: 2026-01-20 18:41:06.157295741 +0000 UTC m=+0.041997758 container create ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc (image=quay.io/ceph/ceph:v19, name=condescending_herschel, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:06 compute-0 systemd[1]: Started libpod-conmon-ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc.scope.
Jan 20 18:41:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0863d0d20433a1f1ebae7bf08b229cf9381d105124f3864cad5ab34a4b16e7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0863d0d20433a1f1ebae7bf08b229cf9381d105124f3864cad5ab34a4b16e7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:06 compute-0 podman[85520]: 2026-01-20 18:41:06.222010177 +0000 UTC m=+0.106712184 container init ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc (image=quay.io/ceph/ceph:v19, name=condescending_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:06 compute-0 podman[85520]: 2026-01-20 18:41:06.227978871 +0000 UTC m=+0.112680878 container start ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc (image=quay.io/ceph/ceph:v19, name=condescending_herschel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:06 compute-0 podman[85520]: 2026-01-20 18:41:06.136108398 +0000 UTC m=+0.020810425 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:06 compute-0 podman[85520]: 2026-01-20 18:41:06.23162016 +0000 UTC m=+0.116322197 container attach ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc (image=quay.io/ceph/ceph:v19, name=condescending_herschel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:06 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:06 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v70: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:41:06
Jan 20 18:41:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:41:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Jan 20 18:41:06 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:06 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:06 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 18:41:06 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
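The pg_autoscaler lines above show why pg_num jumps next: each empty pool quantizes to a 32-PG target while the tiny .mgr pool stays at 1, and the module then dispatches the osd pool set commands seen in the following audit entries. The same adjustment could be made by hand (a sketch; pool name taken from this log):

    # Manual equivalent of what the autoscaler dispatches per pool:
    ceph osd pool set vms pg_num 32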
Jan 20 18:41:07 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:41:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:41:07 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:07 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:07 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 18:41:07 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 18:41:07 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 18:41:07 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:41:08 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 18:41:08 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:08 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:08 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:08 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 18:41:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:08 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:08 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:08 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:08 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 18:41:09 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:09 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:09 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 18:41:09 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:09 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:09 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:09 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:09 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 18:41:09 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v72: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_auth_request failed to assign global_id
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Jan 20 18:41:10 compute-0 ceph-mon[74381]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : monmap epoch 2
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T18:41:05.793025+0000
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : created 2026-01-20T18:38:43.724879+0000
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e18: 2 total, 2 up, 2 in
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.cepfkm(active, since 2m)
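At this point mon.compute-0 and mon.compute-2 form a two-member quorum on monmap epoch 2 while mon.compute-1 is still being deployed. A sketch of standard Ceph commands for inspecting the same state interactively (not taken from this run):

    ceph quorum_status --format json-pretty   # quorum ranks and current leader
    ceph mon dump                             # monmap epoch and member addresses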
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
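The health hint above names the fix. For these pools, which back OpenStack Nova and Cinder via RBD, the application tag would presumably be rbd; the log does not show the enable step being run, so this is a hypothetical remediation sketch:

    ceph osd pool application enable vms rbd
    ceph osd pool application enable volumes rbd
    ceph osd pool application enable backups rbd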
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev c7680c61-05cf-405e-8acd-77ad70cd9c77 (Updating mon deployment (+2 -> 3))
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event c7680c61-05cf-405e-8acd-77ad70cd9c77 (Updating mon deployment (+2 -> 3)) in 8 seconds
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev c11c05fb-aef8-4352-bf3a-8fe9a61992df (Updating mgr deployment (+2 -> 3))
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e19 e19: 2 total, 2 up, 2 in
Jan 20 18:41:10 compute-0 ceph-mon[74381]: Deploying daemon mon.compute-1 on compute-1
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0 calling monitor election
Jan 20 18:41:10 compute-0 ceph-mon[74381]: pgmap v70: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-2 calling monitor election
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: pgmap v71: 5 pgs: 1 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: pgmap v72: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: monmap epoch 2
Jan 20 18:41:10 compute-0 ceph-mon[74381]: fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:10 compute-0 ceph-mon[74381]: last_changed 2026-01-20T18:41:05.793025+0000
Jan 20 18:41:10 compute-0 ceph-mon[74381]: created 2026-01-20T18:38:43.724879+0000
Jan 20 18:41:10 compute-0 ceph-mon[74381]: min_mon_release 19 (squid)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: election_strategy: 1
Jan 20 18:41:10 compute-0 ceph-mon[74381]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:41:10 compute-0 ceph-mon[74381]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 20 18:41:10 compute-0 ceph-mon[74381]: fsmap 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: osdmap e18: 2 total, 2 up, 2 in
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mgrmap e9: compute-0.cepfkm(active, since 2m)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: Health detail: HEALTH_WARN 3 pool(s) do not have an application enabled
Jan 20 18:41:10 compute-0 ceph-mon[74381]: [WRN] POOL_APP_NOT_ENABLED: 3 pool(s) do not have an application enabled
Jan 20 18:41:10 compute-0 ceph-mon[74381]:     application not enabled on pool 'vms'
Jan 20 18:41:10 compute-0 ceph-mon[74381]:     application not enabled on pool 'volumes'
Jan 20 18:41:10 compute-0 ceph-mon[74381]:     application not enabled on pool 'backups'
Jan 20 18:41:10 compute-0 ceph-mon[74381]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e19: 2 total, 2 up, 2 in
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 833771ee-f8cb-4236-b843-9d5c1a09f2e5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 19 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=18) [0] r=0 lpr=18 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.pyghhf on compute-2
Jan 20 18:41:10 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.pyghhf on compute-2
Jan 20 18:41:11 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 18:41:11 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/2203561367; not ready for session (expect reconnect)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 20 18:41:11 compute-0 ceph-mon[74381]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:41:11 compute-0 ceph-mon[74381]: osdmap e19: 2 total, 2 up, 2 in
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: Deploying daemon mgr.compute-2.pyghhf on compute-2
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:41:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e20 e20: 2 total, 2 up, 2 in
Jan 20 18:41:11 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 20 18:41:11 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 2ac42b79-bd07-4622-8857-7ad728274a81 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:41:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Jan 20 18:41:12 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 20 18:41:12 compute-0 ceph-mon[74381]: paxos.0).electionLogic(10) init, last seen epoch 10
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v75: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:41:12.796+0000 7f6478801640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Jan 20 18:41:12 compute-0 ceph-mgr[74676]: mgr.server handle_report got status from non-daemon mon.compute-2
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:12 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:13 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:13 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:13 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:13 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:13 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:13 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:14 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:14 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:14 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:14 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:14 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 3 completed events
Jan 20 18:41:14 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:41:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v76: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:14 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:14 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:15 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:15 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:15 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:15 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:15 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:15 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:15 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:15 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:16 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:16 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:16 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:16 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:16 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:16 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v77: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:16 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:16 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:16 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Jan 20 18:41:17 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T18:41:12.004140+0000
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : created 2026-01-20T18:38:43.724879+0000
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e20: 2 total, 2 up, 2 in
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.cepfkm(active, since 2m)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
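The MON_DOWN warning is transient here: cephadm is still deploying mon.compute-1 (see the "Deploying daemon mon.compute-1" lines above), and the monitor joins quorum once it starts. If it persisted, a first check from compute-1 might look like the following sketch (not from this run; the unit name assumes cephadm's ceph-<fsid>@<daemon> naming convention, with the fsid taken from this log):

    ceph health detail
    systemctl status ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mon.compute-1.service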
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 78f26301-0edf-42fd-820c-2ccd753dd036 (Global Recovery Event) in 13 seconds
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.whkwsm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.whkwsm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.whkwsm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.whkwsm on compute-1
Jan 20 18:41:17 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.whkwsm on compute-1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e21 e21: 2 total, 2 up, 2 in
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0 calling monitor election
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-2 calling monitor election
Jan 20 18:41:17 compute-0 ceph-mon[74381]: pgmap v75: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: pgmap v76: 5 pgs: 1 creating+peering, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: pgmap v77: 5 pgs: 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: monmap epoch 3
Jan 20 18:41:17 compute-0 ceph-mon[74381]: fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: last_changed 2026-01-20T18:41:12.004140+0000
Jan 20 18:41:17 compute-0 ceph-mon[74381]: created 2026-01-20T18:38:43.724879+0000
Jan 20 18:41:17 compute-0 ceph-mon[74381]: min_mon_release 19 (squid)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: election_strategy: 1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:41:17 compute-0 ceph-mon[74381]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 20 18:41:17 compute-0 ceph-mon[74381]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 20 18:41:17 compute-0 ceph-mon[74381]: fsmap 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: osdmap e20: 2 total, 2 up, 2 in
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mgrmap e9: compute-0.cepfkm(active, since 2m)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: Health check failed: 1/3 mons down, quorum compute-0,compute-2 (MON_DOWN)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: Health detail: HEALTH_WARN 1/3 mons down, quorum compute-0,compute-2; 4 pool(s) do not have an application enabled
Jan 20 18:41:17 compute-0 ceph-mon[74381]: [WRN] MON_DOWN: 1/3 mons down, quorum compute-0,compute-2
Jan 20 18:41:17 compute-0 ceph-mon[74381]:     mon.compute-1 (rank 2) addr [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] is down (out of quorum)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 20 18:41:17 compute-0 ceph-mon[74381]:     application not enabled on pool 'vms'
Jan 20 18:41:17 compute-0 ceph-mon[74381]:     application not enabled on pool 'volumes'
Jan 20 18:41:17 compute-0 ceph-mon[74381]:     application not enabled on pool 'backups'
Jan 20 18:41:17 compute-0 ceph-mon[74381]:     application not enabled on pool 'images'
Jan 20 18:41:17 compute-0 ceph-mon[74381]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:17 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.whkwsm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e21: 2 total, 2 up, 2 in
Jan 20 18:41:17 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 6a5f1425-ca93-4e4e-b6be-cf6e2dff63c1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:41:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:18 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.whkwsm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: Deploying daemon mgr.compute-1.whkwsm on compute-1
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: osdmap e21: 2 total, 2 up, 2 in
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e22 e22: 2 total, 2 up, 2 in
Jan 20 18:41:18 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev cde8bd7b-42de-4afc-babc-19e0e3a801de (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 833771ee-f8cb-4236-b843-9d5c1a09f2e5 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 833771ee-f8cb-4236-b843-9d5c1a09f2e5 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 7 seconds
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 2ac42b79-bd07-4622-8857-7ad728274a81 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 2ac42b79-bd07-4622-8857-7ad728274a81 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 6 seconds
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 6a5f1425-ca93-4e4e-b6be-cf6e2dff63c1 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 6a5f1425-ca93-4e4e-b6be-cf6e2dff63c1 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 1 seconds
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev cde8bd7b-42de-4afc-babc-19e0e3a801de (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event cde8bd7b-42de-4afc-babc-19e0e3a801de (PG autoscaler increasing pool 5 PGs from 1 to 32) in 0 seconds
Jan 20 18:41:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v80: 67 pgs: 1 peering, 62 unknown, 4 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4293271698' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 21 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=21 pruub=8.975322723s) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active pruub 54.785179138s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=21 pruub=8.975322723s) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown pruub 54.785179138s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.2( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.17( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.18( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1b( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1c( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.19( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1a( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1f( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1d( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.1e( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.3( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.4( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.8( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.7( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.5( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.6( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.b( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.9( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.c( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.a( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.d( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.e( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.11( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.12( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.f( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.10( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.15( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.16( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.13( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 22 pg[3.14( empty local-lis/les=15/16 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4293271698' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Jan 20 18:41:19 compute-0 ceph-mon[74381]: paxos.0).electionLogic(12) init, last seen epoch 12
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : monmap epoch 3
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T18:41:12.004140+0000
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : created 2026-01-20T18:38:43.724879+0000
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e22: 2 total, 2 up, 2 in
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.cepfkm(active, since 2m)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'vms'
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'volumes'
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'backups'
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     application not enabled on pool 'images'
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev c11c05fb-aef8-4352-bf3a-8fe9a61992df (Updating mgr deployment (+2 -> 3))
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event c11c05fb-aef8-4352-bf3a-8fe9a61992df (Updating mgr deployment (+2 -> 3)) in 8 seconds
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 75f8f1f9-5061-4f3f-8b61-8836a9d12f68 (Updating crash deployment (+1 -> 3))
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Jan 20 18:41:19 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4293271698' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e23 e23: 2 total, 2 up, 2 in
Jan 20 18:41:19 compute-0 condescending_herschel[85536]: pool 'cephfs.cephfs.meta' created
Jan 20 18:41:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e23: 2 total, 2 up, 2 in
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=9.490025520s) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active pruub 55.802097321s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=15.675267220s) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active pruub 61.987434387s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[5.0( empty local-lis/les=18/19 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23 pruub=15.675267220s) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown pruub 61.987434387s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=16/17 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23 pruub=9.490025520s) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown pruub 55.802097321s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-1 calling monitor election
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-2 calling monitor election
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-1 calling monitor election
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4293271698' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0 calling monitor election
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: monmap epoch 3
Jan 20 18:41:19 compute-0 ceph-mon[74381]: fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:19 compute-0 ceph-mon[74381]: last_changed 2026-01-20T18:41:12.004140+0000
Jan 20 18:41:19 compute-0 ceph-mon[74381]: created 2026-01-20T18:38:43.724879+0000
Jan 20 18:41:19 compute-0 ceph-mon[74381]: min_mon_release 19 (squid)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: election_strategy: 1
Jan 20 18:41:19 compute-0 ceph-mon[74381]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 18:41:19 compute-0 ceph-mon[74381]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Jan 20 18:41:19 compute-0 ceph-mon[74381]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Jan 20 18:41:19 compute-0 ceph-mon[74381]: fsmap 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: osdmap e22: 2 total, 2 up, 2 in
Jan 20 18:41:19 compute-0 ceph-mon[74381]: mgrmap e9: compute-0.cepfkm(active, since 2m)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: Health check cleared: MON_DOWN (was: 1/3 mons down, quorum compute-0,compute-2)
Jan 20 18:41:19 compute-0 ceph-mon[74381]: Health detail: HEALTH_WARN 4 pool(s) do not have an application enabled
Jan 20 18:41:19 compute-0 ceph-mon[74381]: [WRN] POOL_APP_NOT_ENABLED: 4 pool(s) do not have an application enabled
Jan 20 18:41:19 compute-0 ceph-mon[74381]:     application not enabled on pool 'vms'
Jan 20 18:41:19 compute-0 ceph-mon[74381]:     application not enabled on pool 'volumes'
Jan 20 18:41:19 compute-0 ceph-mon[74381]:     application not enabled on pool 'backups'
Jan 20 18:41:19 compute-0 ceph-mon[74381]:     application not enabled on pool 'images'
Jan 20 18:41:19 compute-0 ceph-mon[74381]:     use 'ceph osd pool application enable <pool-name> <app-name>', where <app-name> is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 18:41:19 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1a( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.17( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.19( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.18( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.16( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.15( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.14( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.13( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.11( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.e( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.f( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.10( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.d( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.c( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.b( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.7( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.0( empty local-lis/les=21/23 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.6( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.2( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.4( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.3( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.8( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.5( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.a( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.9( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1b( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1c( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1f( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1d( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.1e( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 23 pg[3.12( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=15/15 les/c/f=16/16/0 sis=21) [0] r=0 lpr=21 pi=[15,21)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:19 compute-0 systemd[1]: libpod-ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc.scope: Deactivated successfully.
Jan 20 18:41:19 compute-0 podman[85520]: 2026-01-20 18:41:19.274735098 +0000 UTC m=+13.159437105 container died ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc (image=quay.io/ceph/ceph:v19, name=condescending_herschel, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:19 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 20 18:41:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0863d0d20433a1f1ebae7bf08b229cf9381d105124f3864cad5ab34a4b16e7c-merged.mount: Deactivated successfully.
Jan 20 18:41:19 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 20 18:41:19 compute-0 systemd[75709]: Starting Mark boot as successful...
Jan 20 18:41:19 compute-0 systemd[75709]: Finished Mark boot as successful.
Jan 20 18:41:19 compute-0 podman[85520]: 2026-01-20 18:41:19.32349833 +0000 UTC m=+13.208200337 container remove ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc (image=quay.io/ceph/ceph:v19, name=condescending_herschel, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:41:19 compute-0 systemd[1]: libpod-conmon-ffb33701f39a17d67d828e3a0a2c67fd5e8c12857ad5284987a9e3c495fb62fc.scope: Deactivated successfully.
Jan 20 18:41:19 compute-0 sudo[85517]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:19 compute-0 sudo[85600]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvpgxpuafwpunadzocvwmzvcpbjxtko ; /usr/bin/python3'
Jan 20 18:41:19 compute-0 sudo[85600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:19 compute-0 python3[85602]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:19 compute-0 podman[85603]: 2026-01-20 18:41:19.737225983 +0000 UTC m=+0.044061979 container create f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7 (image=quay.io/ceph/ceph:v19, name=priceless_agnesi, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:19 compute-0 systemd[1]: Started libpod-conmon-f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7.scope.
Jan 20 18:41:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:19 compute-0 podman[85603]: 2026-01-20 18:41:19.718857087 +0000 UTC m=+0.025693103 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a927ac8898ff4c338eaadf56ca8d6dae0b79d2cdf0dd5255931ad32daff4bc4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a927ac8898ff4c338eaadf56ca8d6dae0b79d2cdf0dd5255931ad32daff4bc4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:19 compute-0 podman[85603]: 2026-01-20 18:41:19.83710633 +0000 UTC m=+0.143942356 container init f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7 (image=quay.io/ceph/ceph:v19, name=priceless_agnesi, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:41:19 compute-0 podman[85603]: 2026-01-20 18:41:19.847128795 +0000 UTC m=+0.153964821 container start f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7 (image=quay.io/ceph/ceph:v19, name=priceless_agnesi, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:19 compute-0 podman[85603]: 2026-01-20 18:41:19.851910012 +0000 UTC m=+0.158746028 container attach f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7 (image=quay.io/ceph/ceph:v19, name=priceless_agnesi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:20 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/2619206661; not ready for session (expect reconnect)
Jan 20 18:41:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:41:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 20 18:41:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e24 e24: 2 total, 2 up, 2 in
Jan 20 18:41:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e24: 2 total, 2 up, 2 in
Jan 20 18:41:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 18:41:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2118483369' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.10( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.10( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.11( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.13( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.12( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.12( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.13( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.15( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.11( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.14( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.14( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.17( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.16( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.16( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.17( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.9( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.8( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.8( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.9( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.15( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.6( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.7( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.7( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.6( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.4( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.5( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.5( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.4( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.2( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.3( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.3( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.2( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.e( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.f( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.f( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.e( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.c( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.d( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1d( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1c( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1a( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1b( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1b( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1a( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.18( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.19( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.19( empty local-lis/les=18/19 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.18( empty local-lis/les=16/17 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.10( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.11( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.14( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.17( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.16( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.12( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.6( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.0( empty local-lis/les=23/24 n=0 ec=16/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.17( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.7( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.5( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.4( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.3( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.19( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=16/16 les/c/f=17/17/0 sis=23) [0] r=0 lpr=23 pi=[16,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 24 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=18/18 les/c/f=19/19/0 sis=23) [0] r=0 lpr=23 pi=[18,23)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:20 compute-0 ceph-mon[74381]: 2.1e scrub starts
Jan 20 18:41:20 compute-0 ceph-mon[74381]: 2.1e scrub ok
Jan 20 18:41:20 compute-0 ceph-mon[74381]: Deploying daemon crash.compute-2 on compute-2
Jan 20 18:41:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:41:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4293271698' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:20 compute-0 ceph-mon[74381]: osdmap e23: 2 total, 2 up, 2 in
Jan 20 18:41:20 compute-0 ceph-mon[74381]: 2.1f scrub starts
Jan 20 18:41:20 compute-0 ceph-mon[74381]: 3.17 scrub starts
Jan 20 18:41:20 compute-0 ceph-mon[74381]: 2.1f scrub ok
Jan 20 18:41:20 compute-0 ceph-mon[74381]: 3.17 scrub ok
Jan 20 18:41:20 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:41:20 compute-0 ceph-mon[74381]: osdmap e24: 2 total, 2 up, 2 in
Jan 20 18:41:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2118483369' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Jan 20 18:41:20 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 20 18:41:20 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 20 18:41:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v83: 130 pgs: 1 creating+peering, 3 peering, 93 unknown, 33 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 75f8f1f9-5061-4f3f-8b61-8836a9d12f68 (Updating crash deployment (+1 -> 3))
Jan 20 18:41:21 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 75f8f1f9-5061-4f3f-8b61-8836a9d12f68 (Updating crash deployment (+1 -> 3)) in 2 seconds
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2118483369' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e25 e25: 2 total, 2 up, 2 in
Jan 20 18:41:21 compute-0 priceless_agnesi[85618]: pool 'cephfs.cephfs.data' created
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e25: 2 total, 2 up, 2 in
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:21 compute-0 systemd[1]: libpod-f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7.scope: Deactivated successfully.
Jan 20 18:41:21 compute-0 podman[85603]: 2026-01-20 18:41:21.287941263 +0000 UTC m=+1.594777279 container died f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7 (image=quay.io/ceph/ceph:v19, name=priceless_agnesi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:41:21 compute-0 ceph-mon[74381]: 2.9 deep-scrub starts
Jan 20 18:41:21 compute-0 ceph-mon[74381]: 2.9 deep-scrub ok
Jan 20 18:41:21 compute-0 ceph-mon[74381]: 3.19 scrub starts
Jan 20 18:41:21 compute-0 ceph-mon[74381]: 3.19 scrub ok
Jan 20 18:41:21 compute-0 ceph-mon[74381]: pgmap v83: 130 pgs: 1 creating+peering, 3 peering, 93 unknown, 33 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2118483369' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 18:41:21 compute-0 ceph-mon[74381]: osdmap e25: 2 total, 2 up, 2 in
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:41:21 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a927ac8898ff4c338eaadf56ca8d6dae0b79d2cdf0dd5255931ad32daff4bc4-merged.mount: Deactivated successfully.
Jan 20 18:41:21 compute-0 podman[85603]: 2026-01-20 18:41:21.328790976 +0000 UTC m=+1.635626982 container remove f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7 (image=quay.io/ceph/ceph:v19, name=priceless_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:21 compute-0 sudo[85645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:21 compute-0 sudo[85645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:21 compute-0 systemd[1]: libpod-conmon-f4441fbac06e214204791750387bd65ebe82a595462e2236fcf66d93de5468a7.scope: Deactivated successfully.
Jan 20 18:41:21 compute-0 sudo[85645]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:21 compute-0 sudo[85600]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:21 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 20 18:41:21 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 20 18:41:21 compute-0 sudo[85681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:41:21 compute-0 sudo[85681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:21 compute-0 sudo[85729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onnyzyrwtgsmxvxbhbhfluvqcgqoncfe ; /usr/bin/python3'
Jan 20 18:41:21 compute-0 sudo[85729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:21 compute-0 python3[85731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:21 compute-0 podman[85739]: 2026-01-20 18:41:21.68220481 +0000 UTC m=+0.045596490 container create f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989 (image=quay.io/ceph/ceph:v19, name=frosty_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 18:41:21 compute-0 systemd[1]: Started libpod-conmon-f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989.scope.
Jan 20 18:41:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713466eeaade9ab828014d36d8b06b0e65df7699fcc5907c56ec8e02eb64de8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713466eeaade9ab828014d36d8b06b0e65df7699fcc5907c56ec8e02eb64de8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:21 compute-0 podman[85739]: 2026-01-20 18:41:21.757465484 +0000 UTC m=+0.120857234 container init f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989 (image=quay.io/ceph/ceph:v19, name=frosty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:21 compute-0 podman[85739]: 2026-01-20 18:41:21.66218892 +0000 UTC m=+0.025580620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:21 compute-0 podman[85739]: 2026-01-20 18:41:21.765345342 +0000 UTC m=+0.128737022 container start f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989 (image=quay.io/ceph/ceph:v19, name=frosty_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:41:21 compute-0 podman[85739]: 2026-01-20 18:41:21.769602915 +0000 UTC m=+0.132994645 container attach f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989 (image=quay.io/ceph/ceph:v19, name=frosty_cohen, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 18:41:21 compute-0 podman[85791]: 2026-01-20 18:41:21.915619725 +0000 UTC m=+0.049683538 container create d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:41:21 compute-0 systemd[1]: Started libpod-conmon-d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084.scope.
Jan 20 18:41:21 compute-0 podman[85791]: 2026-01-20 18:41:21.889984975 +0000 UTC m=+0.024048768 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:22 compute-0 podman[85791]: 2026-01-20 18:41:22.016616211 +0000 UTC m=+0.150680034 container init d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:41:22 compute-0 podman[85791]: 2026-01-20 18:41:22.027960081 +0000 UTC m=+0.162023864 container start d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:22 compute-0 podman[85791]: 2026-01-20 18:41:22.031558606 +0000 UTC m=+0.165622479 container attach d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:22 compute-0 goofy_booth[85827]: 167 167
Jan 20 18:41:22 compute-0 systemd[1]: libpod-d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084.scope: Deactivated successfully.
Jan 20 18:41:22 compute-0 podman[85791]: 2026-01-20 18:41:22.03659583 +0000 UTC m=+0.170659613 container died d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:41:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f163783367231e19df4566170f3b95c1b5fbc3fbb36d907b5187bd13b3ea4d3-merged.mount: Deactivated successfully.
Jan 20 18:41:22 compute-0 podman[85791]: 2026-01-20 18:41:22.076930609 +0000 UTC m=+0.210994392 container remove d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:22 compute-0 systemd[1]: libpod-conmon-d6b82e85cf78508de01733fd5e7f71d09e7a94f064aa60456e170aef9dc0c084.scope: Deactivated successfully.
Jan 20 18:41:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 20 18:41:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2772028160' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 20 18:41:22 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 10 completed events
Jan 20 18:41:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:41:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 20 18:41:22 compute-0 podman[85851]: 2026-01-20 18:41:22.247654993 +0000 UTC m=+0.026101723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:22 compute-0 podman[85851]: 2026-01-20 18:41:22.361565891 +0000 UTC m=+0.140012611 container create 974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mcnulty, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:41:22 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 20 18:41:22 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 20 18:41:22 compute-0 systemd[1]: Started libpod-conmon-974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa.scope.
Jan 20 18:41:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f5bccaa36f6b0b2d00e7867c24d51f63a4bad4c1b26ecf297296374546f25/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f5bccaa36f6b0b2d00e7867c24d51f63a4bad4c1b26ecf297296374546f25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f5bccaa36f6b0b2d00e7867c24d51f63a4bad4c1b26ecf297296374546f25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f5bccaa36f6b0b2d00e7867c24d51f63a4bad4c1b26ecf297296374546f25/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e7f5bccaa36f6b0b2d00e7867c24d51f63a4bad4c1b26ecf297296374546f25/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:22 compute-0 podman[85851]: 2026-01-20 18:41:22.442614588 +0000 UTC m=+0.221061338 container init 974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:41:22 compute-0 podman[85851]: 2026-01-20 18:41:22.449915461 +0000 UTC m=+0.228362171 container start 974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:22 compute-0 podman[85851]: 2026-01-20 18:41:22.45512021 +0000 UTC m=+0.233566940 container attach 974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:41:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v85: 131 pgs: 1 creating+peering, 3 peering, 94 unknown, 33 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:22 compute-0 unruffled_mcnulty[85867]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:41:22 compute-0 unruffled_mcnulty[85867]: --> All data devices are unavailable
Jan 20 18:41:22 compute-0 systemd[1]: libpod-974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa.scope: Deactivated successfully.
Jan 20 18:41:22 compute-0 podman[85851]: 2026-01-20 18:41:22.750906767 +0000 UTC m=+0.529353477 container died 974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 18:41:23 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 20 18:41:23 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 20 18:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e7f5bccaa36f6b0b2d00e7867c24d51f63a4bad4c1b26ecf297296374546f25-merged.mount: Deactivated successfully.
Jan 20 18:41:23 compute-0 podman[85851]: 2026-01-20 18:41:23.506237961 +0000 UTC m=+1.284684671 container remove 974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:41:23 compute-0 systemd[1]: libpod-conmon-974280137fd1034f203fb95e86f5b9ef139beb2e98d30ec23bcea11eee4306fa.scope: Deactivated successfully.
Jan 20 18:41:23 compute-0 sudo[85681]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:23 compute-0 sudo[85895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:23 compute-0 sudo[85895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:23 compute-0 sudo[85895]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:23 compute-0 sudo[85920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:41:23 compute-0 sudo[85920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2772028160' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 18:41:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e26 e26: 2 total, 2 up, 2 in
Jan 20 18:41:23 compute-0 frosty_cohen[85774]: enabled application 'rbd' on pool 'vms'
Jan 20 18:41:23 compute-0 ceph-mon[74381]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:23 compute-0 ceph-mon[74381]: 2.1d scrub starts
Jan 20 18:41:23 compute-0 ceph-mon[74381]: 2.1d scrub ok
Jan 20 18:41:23 compute-0 ceph-mon[74381]: 3.18 scrub starts
Jan 20 18:41:23 compute-0 ceph-mon[74381]: 3.18 scrub ok
Jan 20 18:41:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2772028160' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Jan 20 18:41:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e26: 2 total, 2 up, 2 in
Jan 20 18:41:23 compute-0 systemd[1]: libpod-f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989.scope: Deactivated successfully.
Jan 20 18:41:23 compute-0 podman[85739]: 2026-01-20 18:41:23.849061855 +0000 UTC m=+2.212453545 container died f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989 (image=quay.io/ceph/ceph:v19, name=frosty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b713466eeaade9ab828014d36d8b06b0e65df7699fcc5907c56ec8e02eb64de8-merged.mount: Deactivated successfully.
Jan 20 18:41:23 compute-0 podman[85739]: 2026-01-20 18:41:23.888183311 +0000 UTC m=+2.251574991 container remove f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989 (image=quay.io/ceph/ceph:v19, name=frosty_cohen, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:23 compute-0 systemd[1]: libpod-conmon-f707cb654c205194c863293fd98ea088a43837c5f7c2ee3b341692a24e975989.scope: Deactivated successfully.
Jan 20 18:41:23 compute-0 sudo[85729]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf started
Jan 20 18:41:23 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mgr.compute-2.pyghhf 192.168.122.102:0/2898323745; not ready for session (expect reconnect)
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:24.009837784 +0000 UTC m=+0.037120514 container create 02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_perlman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:41:24 compute-0 systemd[1]: Started libpod-conmon-02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c.scope.
Jan 20 18:41:24 compute-0 sudo[86030]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijnimmpwuzdtlzsbrtpgrgmokpxgrqbk ; /usr/bin/python3'
Jan 20 18:41:24 compute-0 sudo[86030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:23.993447721 +0000 UTC m=+0.020730461 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:24.093940093 +0000 UTC m=+0.121222813 container init 02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_perlman, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:24.10021123 +0000 UTC m=+0.127493950 container start 02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_perlman, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:24 compute-0 elegant_perlman[86032]: 167 167
Jan 20 18:41:24 compute-0 systemd[1]: libpod-02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c.scope: Deactivated successfully.
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:24.105296085 +0000 UTC m=+0.132578805 container attach 02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_perlman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:24.105717215 +0000 UTC m=+0.132999935 container died 02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_perlman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-534fd3494c91d110aa2868d4bdf65ce281d03f58da56b540583db3936be89f42-merged.mount: Deactivated successfully.
Jan 20 18:41:24 compute-0 podman[85991]: 2026-01-20 18:41:24.138461183 +0000 UTC m=+0.165743903 container remove 02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_perlman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:24 compute-0 systemd[1]: libpod-conmon-02aefbdd9ceaab66b3c2e9e32d4ccebdf69364aba2e4f68820238b266fde169c.scope: Deactivated successfully.
Jan 20 18:41:24 compute-0 python3[86034]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.257117327 +0000 UTC m=+0.046555764 container create 9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406 (image=quay.io/ceph/ceph:v19, name=sleepy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.28402108 +0000 UTC m=+0.040132874 container create bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:24 compute-0 systemd[1]: Started libpod-conmon-9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406.scope.
Jan 20 18:41:24 compute-0 systemd[1]: Started libpod-conmon-bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd.scope.
Jan 20 18:41:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8167ef002b44a8409568f0e4b81439bf5f1f5381832b3ec85daba9a704131ef3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8167ef002b44a8409568f0e4b81439bf5f1f5381832b3ec85daba9a704131ef3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d11a44b325d0ec37aed752271048a62ca3e6134eb231c5c2cb8bb157f40295/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d11a44b325d0ec37aed752271048a62ca3e6134eb231c5c2cb8bb157f40295/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d11a44b325d0ec37aed752271048a62ca3e6134eb231c5c2cb8bb157f40295/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50d11a44b325d0ec37aed752271048a62ca3e6134eb231c5c2cb8bb157f40295/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.325617882 +0000 UTC m=+0.115056339 container init 9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406 (image=quay.io/ceph/ceph:v19, name=sleepy_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.233886611 +0000 UTC m=+0.023325078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.330129111 +0000 UTC m=+0.086240935 container init bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.333052359 +0000 UTC m=+0.122490796 container start 9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406 (image=quay.io/ceph/ceph:v19, name=sleepy_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.336365847 +0000 UTC m=+0.125804304 container attach 9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406 (image=quay.io/ceph/ceph:v19, name=sleepy_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.33687225 +0000 UTC m=+0.092984044 container start bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_satoshi, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.340794874 +0000 UTC m=+0.096906728 container attach bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_satoshi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.267026909 +0000 UTC m=+0.023138723 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:24 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 20 18:41:24 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 20 18:41:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v87: 131 pgs: 2 creating+peering, 2 peering, 62 unknown, 65 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:24 compute-0 sad_satoshi[86090]: {
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:     "0": [
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:         {
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "devices": [
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "/dev/loop3"
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             ],
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "lv_name": "ceph_lv0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "lv_size": "21470642176",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "name": "ceph_lv0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "tags": {
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.cluster_name": "ceph",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.crush_device_class": "",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.encrypted": "0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.osd_id": "0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.type": "block",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.vdo": "0",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:                 "ceph.with_tpm": "0"
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             },
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "type": "block",
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:             "vg_name": "ceph_vg0"
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:         }
Jan 20 18:41:24 compute-0 sad_satoshi[86090]:     ]
Jan 20 18:41:24 compute-0 sad_satoshi[86090]: }
Jan 20 18:41:24 compute-0 systemd[1]: libpod-bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd.scope: Deactivated successfully.
Jan 20 18:41:24 compute-0 conmon[86090]: conmon bc6484aed11577024349 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd.scope/container/memory.events
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.647007967 +0000 UTC m=+0.403119771 container died bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_satoshi, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:41:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 20 18:41:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/484498579' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 20 18:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-50d11a44b325d0ec37aed752271048a62ca3e6134eb231c5c2cb8bb157f40295-merged.mount: Deactivated successfully.
Jan 20 18:41:24 compute-0 podman[86069]: 2026-01-20 18:41:24.691000804 +0000 UTC m=+0.447112598 container remove bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:41:24 compute-0 systemd[1]: libpod-conmon-bc6484aed11577024349060948862f3d0a327ec8cea77eb28a06a3f01ac4acfd.scope: Deactivated successfully.
Jan 20 18:41:24 compute-0 sudo[85920]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:24 compute-0 sudo[86132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:24 compute-0 sudo[86132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:24 compute-0 sudo[86132]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 2.b scrub starts
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 2.b scrub ok
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 3.13 scrub starts
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 3.13 scrub ok
Jan 20 18:41:24 compute-0 ceph-mon[74381]: pgmap v85: 131 pgs: 1 creating+peering, 3 peering, 94 unknown, 33 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 2.8 scrub starts
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 2.8 scrub ok
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 3.16 scrub starts
Jan 20 18:41:24 compute-0 ceph-mon[74381]: 3.16 scrub ok
Jan 20 18:41:24 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2772028160' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 18:41:24 compute-0 ceph-mon[74381]: osdmap e26: 2 total, 2 up, 2 in
Jan 20 18:41:24 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf started
Jan 20 18:41:24 compute-0 ceph-mon[74381]: pgmap v87: 131 pgs: 2 creating+peering, 2 peering, 62 unknown, 65 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/484498579' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Jan 20 18:41:24 compute-0 sudo[86157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:41:24 compute-0 sudo[86157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/484498579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 18:41:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e27 e27: 2 total, 2 up, 2 in
Jan 20 18:41:24 compute-0 sleepy_gates[86086]: enabled application 'rbd' on pool 'volumes'
Jan 20 18:41:24 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e27: 2 total, 2 up, 2 in
Jan 20 18:41:24 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.cepfkm(active, since 2m), standbys: compute-2.pyghhf
Jan 20 18:41:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"} v 0)
Jan 20 18:41:24 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:41:24 compute-0 systemd[1]: libpod-9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406.scope: Deactivated successfully.
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.879350654 +0000 UTC m=+0.668789111 container died 9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406 (image=quay.io/ceph/ceph:v19, name=sleepy_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 18:41:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-8167ef002b44a8409568f0e4b81439bf5f1f5381832b3ec85daba9a704131ef3-merged.mount: Deactivated successfully.
Jan 20 18:41:24 compute-0 podman[86052]: 2026-01-20 18:41:24.924393538 +0000 UTC m=+0.713831975 container remove 9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406 (image=quay.io/ceph/ceph:v19, name=sleepy_gates, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:24 compute-0 systemd[1]: libpod-conmon-9eb80d244e12be09dab37623459b6a1d385b4bdded362ff9809731abe48ac406.scope: Deactivated successfully.
Jan 20 18:41:24 compute-0 sudo[86030]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:25 compute-0 sudo[86231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydzmornwplrjvzkgwbdbvxkyxkfyqtza ; /usr/bin/python3'
Jan 20 18:41:25 compute-0 sudo[86231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.231347731 +0000 UTC m=+0.042312292 container create 4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kirch, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:25 compute-0 python3[86235]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:25 compute-0 systemd[1]: Started libpod-conmon-4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc.scope.
Jan 20 18:41:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "bfde61c0-168d-4828-bed5-7d716c4e6136"} v 0)
Jan 20 18:41:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfde61c0-168d-4828-bed5-7d716c4e6136"}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 20 18:41:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bfde61c0-168d-4828-bed5-7d716c4e6136"}]': finished
Jan 20 18:41:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Jan 20 18:41:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Jan 20 18:41:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:25 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:25 compute-0 podman[86276]: 2026-01-20 18:41:25.286457401 +0000 UTC m=+0.040736290 container create 94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0 (image=quay.io/ceph/ceph:v19, name=eloquent_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:41:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.304014067 +0000 UTC m=+0.114978638 container init 4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.207970081 +0000 UTC m=+0.018934662 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:25 compute-0 systemd[1]: Started libpod-conmon-94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0.scope.
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.311900125 +0000 UTC m=+0.122864686 container start 4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.315567292 +0000 UTC m=+0.126531873 container attach 4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:25 compute-0 beautiful_kirch[86289]: 167 167
Jan 20 18:41:25 compute-0 systemd[1]: libpod-4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc.scope: Deactivated successfully.
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.316720973 +0000 UTC m=+0.127685544 container died 4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kirch, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:41:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1acb0bf13ed592eb6c1dc25ccd0b6d7afc3adb32452ead4325bb3cd810b90b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a1acb0bf13ed592eb6c1dc25ccd0b6d7afc3adb32452ead4325bb3cd810b90b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:25 compute-0 podman[86276]: 2026-01-20 18:41:25.350854957 +0000 UTC m=+0.105133866 container init 94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0 (image=quay.io/ceph/ceph:v19, name=eloquent_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:41:25 compute-0 podman[86276]: 2026-01-20 18:41:25.359088886 +0000 UTC m=+0.113367775 container start 94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0 (image=quay.io/ceph/ceph:v19, name=eloquent_lamport, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:25 compute-0 podman[86276]: 2026-01-20 18:41:25.266757089 +0000 UTC m=+0.021035998 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:25 compute-0 podman[86262]: 2026-01-20 18:41:25.364387496 +0000 UTC m=+0.175352057 container remove 4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:25 compute-0 systemd[1]: libpod-conmon-4a9835f14fc271f5d16596d123fb1529b17b68efd53830ce2945bb239d77a0fc.scope: Deactivated successfully.
Jan 20 18:41:25 compute-0 podman[86276]: 2026-01-20 18:41:25.373021235 +0000 UTC m=+0.127300124 container attach 94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0 (image=quay.io/ceph/ceph:v19, name=eloquent_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 18:41:25 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 20 18:41:25 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 20 18:41:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3736efa4c4eb85c346df79b3d67336c166d8019534f03cd17ed88c1f6cb793b9-merged.mount: Deactivated successfully.
Jan 20 18:41:25 compute-0 podman[86330]: 2026-01-20 18:41:25.528164796 +0000 UTC m=+0.051379553 container create f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_ardinghelli, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 18:41:25 compute-0 systemd[1]: Started libpod-conmon-f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab.scope.
Jan 20 18:41:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea3a587ac3233dabed7b7ae4380b5f3038622952a10ae89cb1fd10148e811d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea3a587ac3233dabed7b7ae4380b5f3038622952a10ae89cb1fd10148e811d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea3a587ac3233dabed7b7ae4380b5f3038622952a10ae89cb1fd10148e811d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dea3a587ac3233dabed7b7ae4380b5f3038622952a10ae89cb1fd10148e811d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:25 compute-0 podman[86330]: 2026-01-20 18:41:25.604015805 +0000 UTC m=+0.127230582 container init f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_ardinghelli, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 18:41:25 compute-0 podman[86330]: 2026-01-20 18:41:25.508612688 +0000 UTC m=+0.031827465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:25 compute-0 podman[86330]: 2026-01-20 18:41:25.611823753 +0000 UTC m=+0.135038510 container start f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:41:25 compute-0 podman[86330]: 2026-01-20 18:41:25.615155641 +0000 UTC m=+0.138370398 container attach f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_ardinghelli, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 20 18:41:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3529559985' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mon[74381]: 2.a scrub starts
Jan 20 18:41:25 compute-0 ceph-mon[74381]: 2.a scrub ok
Jan 20 18:41:25 compute-0 ceph-mon[74381]: 3.15 scrub starts
Jan 20 18:41:25 compute-0 ceph-mon[74381]: 3.15 scrub ok
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/484498579' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 18:41:25 compute-0 ceph-mon[74381]: osdmap e27: 2 total, 2 up, 2 in
Jan 20 18:41:25 compute-0 ceph-mon[74381]: mgrmap e10: compute-0.cepfkm(active, since 2m), standbys: compute-2.pyghhf
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1701210965' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfde61c0-168d-4828-bed5-7d716c4e6136"}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "bfde61c0-168d-4828-bed5-7d716c4e6136"}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "bfde61c0-168d-4828-bed5-7d716c4e6136"}]': finished
Jan 20 18:41:25 compute-0 ceph-mon[74381]: osdmap e28: 3 total, 2 up, 3 in
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3529559985' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Jan 20 18:41:26 compute-0 lvm[86435]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:41:26 compute-0 lvm[86435]: VG ceph_vg0 finished
Jan 20 18:41:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 20 18:41:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3529559985' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 18:41:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e29 e29: 3 total, 2 up, 3 in
Jan 20 18:41:26 compute-0 eloquent_lamport[86297]: enabled application 'rbd' on pool 'backups'
Jan 20 18:41:26 compute-0 gifted_ardinghelli[86360]: {}
Jan 20 18:41:26 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 2 up, 3 in
Jan 20 18:41:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:26 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:26 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:26 compute-0 systemd[1]: libpod-94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0.scope: Deactivated successfully.
Jan 20 18:41:26 compute-0 conmon[86297]: conmon 94122ddde3659d1b29a7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0.scope/container/memory.events
Jan 20 18:41:26 compute-0 podman[86276]: 2026-01-20 18:41:26.312843587 +0000 UTC m=+1.067122496 container died 94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0 (image=quay.io/ceph/ceph:v19, name=eloquent_lamport, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:26 compute-0 systemd[1]: libpod-f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab.scope: Deactivated successfully.
Jan 20 18:41:26 compute-0 podman[86330]: 2026-01-20 18:41:26.322488872 +0000 UTC m=+0.845703629 container died f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_ardinghelli, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:41:26 compute-0 systemd[1]: libpod-f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab.scope: Consumed 1.131s CPU time.
Jan 20 18:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a1acb0bf13ed592eb6c1dc25ccd0b6d7afc3adb32452ead4325bb3cd810b90b-merged.mount: Deactivated successfully.
Jan 20 18:41:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dea3a587ac3233dabed7b7ae4380b5f3038622952a10ae89cb1fd10148e811d0-merged.mount: Deactivated successfully.
Jan 20 18:41:26 compute-0 podman[86330]: 2026-01-20 18:41:26.397409728 +0000 UTC m=+0.920624485 container remove f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:26 compute-0 systemd[1]: libpod-conmon-f6139b810be8c3ee109c5e0a710f8327193ff927fffb3acff8bccedecefa79ab.scope: Deactivated successfully.
Jan 20 18:41:26 compute-0 podman[86276]: 2026-01-20 18:41:26.420655863 +0000 UTC m=+1.174934752 container remove 94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0 (image=quay.io/ceph/ceph:v19, name=eloquent_lamport, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:26 compute-0 systemd[1]: libpod-conmon-94122ddde3659d1b29a7193ceaaac0c127248f21a6eb35417ace032fd48883b0.scope: Deactivated successfully.
Jan 20 18:41:26 compute-0 sudo[86157]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:41:26 compute-0 sudo[86231]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:41:26 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 20 18:41:26 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 20 18:41:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v91: 131 pgs: 1 creating+peering, 130 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:26 compute-0 sudo[86485]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-beolhqmiurpteihkvctnghugnqdhurgb ; /usr/bin/python3'
Jan 20 18:41:26 compute-0 sudo[86485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:27 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:27 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 20 18:41:27 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 20 18:41:27 compute-0 python3[86487]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:27 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm started
Jan 20 18:41:27 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from mgr.compute-1.whkwsm 192.168.122.101:0/1582776645; not ready for session (expect reconnect)
Jan 20 18:41:27 compute-0 podman[86488]: 2026-01-20 18:41:27.606929656 +0000 UTC m=+0.041545911 container create 38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5 (image=quay.io/ceph/ceph:v19, name=strange_ellis, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:41:27 compute-0 systemd[1]: Started libpod-conmon-38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5.scope.
Jan 20 18:41:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b0f778b83e0ee81b7d5c6bc5a077e5bb32ca45d200b354fdb60931066a7b01/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1b0f778b83e0ee81b7d5c6bc5a077e5bb32ca45d200b354fdb60931066a7b01/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:27 compute-0 podman[86488]: 2026-01-20 18:41:27.586848754 +0000 UTC m=+0.021465039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:27 compute-0 podman[86488]: 2026-01-20 18:41:27.693713755 +0000 UTC m=+0.128330020 container init 38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5 (image=quay.io/ceph/ceph:v19, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:41:27 compute-0 podman[86488]: 2026-01-20 18:41:27.7033145 +0000 UTC m=+0.137930755 container start 38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5 (image=quay.io/ceph/ceph:v19, name=strange_ellis, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:41:27 compute-0 podman[86488]: 2026-01-20 18:41:27.708381805 +0000 UTC m=+0.142998160 container attach 38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5 (image=quay.io/ceph/ceph:v19, name=strange_ellis, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 18:41:27 compute-0 ceph-mon[74381]: 2.7 scrub starts
Jan 20 18:41:27 compute-0 ceph-mon[74381]: 2.7 scrub ok
Jan 20 18:41:27 compute-0 ceph-mon[74381]: 5.1c scrub starts
Jan 20 18:41:27 compute-0 ceph-mon[74381]: 5.1c scrub ok
Jan 20 18:41:27 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3504336488' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Jan 20 18:41:27 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3529559985' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 18:41:27 compute-0 ceph-mon[74381]: osdmap e29: 3 total, 2 up, 3 in
Jan 20 18:41:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:27 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.cepfkm(active, since 2m), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"} v 0)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1306592034' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 20 18:41:28 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 20 18:41:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 2.6 scrub starts
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 2.6 scrub ok
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 5.1f scrub starts
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 5.1f scrub ok
Jan 20 18:41:28 compute-0 ceph-mon[74381]: pgmap v91: 131 pgs: 1 creating+peering, 130 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 2.5 scrub starts
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 2.5 scrub ok
Jan 20 18:41:28 compute-0 ceph-mon[74381]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 5.1e scrub starts
Jan 20 18:41:28 compute-0 ceph-mon[74381]: 5.1e scrub ok
Jan 20 18:41:28 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm started
Jan 20 18:41:28 compute-0 ceph-mon[74381]: mgrmap e11: compute-0.cepfkm(active, since 2m), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:41:28 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1306592034' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: pgmap v92: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:28 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:28 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:41:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1306592034' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e30 e30: 3 total, 2 up, 3 in
Jan 20 18:41:29 compute-0 strange_ellis[86503]: enabled application 'rbd' on pool 'images'
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 2 up, 3 in
Jan 20 18:41:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:29 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:29 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:29 compute-0 systemd[1]: libpod-38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5.scope: Deactivated successfully.
Jan 20 18:41:29 compute-0 podman[86488]: 2026-01-20 18:41:29.154465011 +0000 UTC m=+1.589081266 container died 38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5 (image=quay.io/ceph/ceph:v19, name=strange_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.1e( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.1f( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.9( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.4( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.6( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.1( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.a( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.d( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.c( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.e( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.10( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.13( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.15( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.19( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[2.1b( empty local-lis/les=0/0 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.057390213s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.324623108s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057758331s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325042725s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.057325363s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.324623108s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.1a( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057727814s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325042725s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.057422638s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.324851990s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064923286s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332382202s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064909935s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332382202s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.057408333s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.324851990s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064011574s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.331558228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063983917s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.331558228s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.16( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057654381s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325256348s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.16( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057637215s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325256348s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057537079s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325263977s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.056868553s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.324615479s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.15( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057518005s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325263977s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063800812s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.331588745s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.056832314s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.324615479s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.14( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057464600s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325263977s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063727379s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.331588745s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064114571s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.331993103s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064098358s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.331993103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.14( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057446480s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325263977s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064282417s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332374573s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064263344s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332374573s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057225227s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325355530s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.11( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057209015s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325355530s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063751221s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.331977844s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063739777s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.331977844s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.10( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057138443s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325401306s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.10( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.057110786s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325401306s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.13( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056859970s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325271606s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063686371s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332359314s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063672066s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332359314s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056609154s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325370789s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.13( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056673050s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325271606s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.f( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056595802s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325370789s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056520462s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325363159s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.e( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056504250s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325363159s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063467026s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332351685s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063462257s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332351685s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.d( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056510925s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325408936s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063455582s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332382202s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063441277s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332351685s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.d( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056497574s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325408936s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063443184s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332382202s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063446999s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332351685s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056396484s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325416565s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.c( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056375504s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325416565s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063915253s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333099365s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063900948s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333099365s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063833237s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333091736s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063816071s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333091736s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063853264s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333099365s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063782692s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333099365s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063747406s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333152771s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064519882s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333930969s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063732147s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333152771s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.064508438s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333930969s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063367844s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.332435608s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.062870979s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.332435608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056129456s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325836182s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.3( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056115150s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325836182s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063484192s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333297729s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.5( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056077003s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325912476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063464165s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333297729s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.5( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.056066513s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325912476s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063477516s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333412170s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063921928s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333869934s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063462257s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333412170s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063905716s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333869934s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055932045s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325920105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.9( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055915833s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325920105s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063888550s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333953857s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063878059s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333953857s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055813789s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325920105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063795090s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333923340s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.a( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055796623s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325920105s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063777924s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333923340s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063742638s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333938599s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063731194s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333938599s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.1c( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055721283s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325973511s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063848495s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.334106445s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.1c( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055704117s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325973511s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063656807s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.333946228s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063832283s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.334106445s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063644409s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.333946228s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063570023s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.334037781s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055494308s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active pruub 70.325988770s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063550949s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.334037781s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063496590s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.334037781s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063503265s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 active pruub 71.334045410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[3.1d( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=30 pruub=14.055454254s) [1] r=-1 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 70.325988770s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063478470s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.334037781s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=30 pruub=15.063483238s) [1] r=-1 lpr=30 pi=[23,30)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 71.334045410s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1b0f778b83e0ee81b7d5c6bc5a077e5bb32ca45d200b354fdb60931066a7b01-merged.mount: Deactivated successfully.
Jan 20 18:41:29 compute-0 podman[86488]: 2026-01-20 18:41:29.407369541 +0000 UTC m=+1.841985796 container remove 38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5 (image=quay.io/ceph/ceph:v19, name=strange_ellis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:41:29 compute-0 sudo[86485]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:29 compute-0 systemd[1]: libpod-conmon-38104af0280b64a4385a8ad5329bf26f3988b5986b5db99a9ff7223f660331a5.scope: Deactivated successfully.
Jan 20 18:41:29 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 20 18:41:29 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 20 18:41:29 compute-0 sudo[86564]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjwjohsqnwuffbkadddbiykodyrcmpak ; /usr/bin/python3'
Jan 20 18:41:29 compute-0 sudo[86564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:29 compute-0 python3[86566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:29 compute-0 podman[86567]: 2026-01-20 18:41:29.724126265 +0000 UTC m=+0.046302668 container create 13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6 (image=quay.io/ceph/ceph:v19, name=zealous_mcnulty, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:41:29 compute-0 systemd[1]: Started libpod-conmon-13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6.scope.
Jan 20 18:41:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e7e71c2b1e27b72428a97a4b94e302094bde551e6985572bc1cd956d3a38a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d3e7e71c2b1e27b72428a97a4b94e302094bde551e6985572bc1cd956d3a38a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:29 compute-0 podman[86567]: 2026-01-20 18:41:29.699081011 +0000 UTC m=+0.021257434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 20 18:41:30 compute-0 podman[86567]: 2026-01-20 18:41:30.20816882 +0000 UTC m=+0.530345243 container init 13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6 (image=quay.io/ceph/ceph:v19, name=zealous_mcnulty, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:30 compute-0 podman[86567]: 2026-01-20 18:41:30.217558449 +0000 UTC m=+0.539734852 container start 13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6 (image=quay.io/ceph/ceph:v19, name=zealous_mcnulty, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:41:30 compute-0 ceph-mon[74381]: 2.4 scrub starts
Jan 20 18:41:30 compute-0 ceph-mon[74381]: 2.4 scrub ok
Jan 20 18:41:30 compute-0 ceph-mon[74381]: 5.10 scrub starts
Jan 20 18:41:30 compute-0 ceph-mon[74381]: 5.10 scrub ok
Jan 20 18:41:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1306592034' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 18:41:30 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:30 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:30 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:30 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:41:30 compute-0 ceph-mon[74381]: osdmap e30: 3 total, 2 up, 3 in
Jan 20 18:41:30 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:30 compute-0 podman[86567]: 2026-01-20 18:41:30.278775971 +0000 UTC m=+0.600952374 container attach 13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6 (image=quay.io/ceph/ceph:v19, name=zealous_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:41:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e31 e31: 3 total, 2 up, 3 in
Jan 20 18:41:30 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.1e deep-scrub starts
Jan 20 18:41:30 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.1e deep-scrub ok
Jan 20 18:41:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v95: 131 pgs: 15 peering, 116 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:30 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 2 up, 3 in
Jan 20 18:41:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 20 18:41:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1704553861' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 20 18:41:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:30 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:30 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.1f( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.1e( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.9( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.4( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.1( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.6( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.a( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.d( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.c( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.e( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.13( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.10( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.15( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.19( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 31 pg[2.1b( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=21/21 les/c/f=22/22/0 sis=30) [0] r=0 lpr=30 pi=[21,30)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:41:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 20 18:41:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 20 18:41:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 20 18:41:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:31 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:31 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Jan 20 18:41:31 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 2.1c scrub starts
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 2.1c scrub ok
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 4.1d scrub starts
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 4.1d scrub ok
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 2.2 scrub starts
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 2.2 scrub ok
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 4.1e deep-scrub starts
Jan 20 18:41:31 compute-0 ceph-mon[74381]: 4.1e deep-scrub ok
Jan 20 18:41:31 compute-0 ceph-mon[74381]: osdmap e31: 3 total, 2 up, 3 in
Jan 20 18:41:31 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1704553861' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Jan 20 18:41:31 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:31 compute-0 ceph-mon[74381]: pgmap v95: 131 pgs: 15 peering, 116 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:31 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 20 18:41:31 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 20 18:41:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1704553861' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 18:41:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e32 e32: 3 total, 2 up, 3 in
Jan 20 18:41:31 compute-0 zealous_mcnulty[86582]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 20 18:41:31 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 2 up, 3 in
Jan 20 18:41:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:31 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:31 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:31 compute-0 systemd[1]: libpod-13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6.scope: Deactivated successfully.
Jan 20 18:41:31 compute-0 podman[86567]: 2026-01-20 18:41:31.900415189 +0000 UTC m=+2.222591582 container died 13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6 (image=quay.io/ceph/ceph:v19, name=zealous_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d3e7e71c2b1e27b72428a97a4b94e302094bde551e6985572bc1cd956d3a38a-merged.mount: Deactivated successfully.
Jan 20 18:41:31 compute-0 podman[86567]: 2026-01-20 18:41:31.953150426 +0000 UTC m=+2.275326849 container remove 13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6 (image=quay.io/ceph/ceph:v19, name=zealous_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 18:41:31 compute-0 systemd[1]: libpod-conmon-13517807d3639eebf313bb9c4649202e73083cea914a1277fb9f10666c1459b6.scope: Deactivated successfully.
Jan 20 18:41:31 compute-0 sudo[86564]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:32 compute-0 sudo[86641]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwvyhfahyjibzfjrlhkrhutbnajxbrze ; /usr/bin/python3'
Jan 20 18:41:32 compute-0 sudo[86641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:32 compute-0 python3[86643]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:32 compute-0 podman[86644]: 2026-01-20 18:41:32.323382956 +0000 UTC m=+0.047356646 container create 664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064 (image=quay.io/ceph/ceph:v19, name=heuristic_hellman, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 18:41:32 compute-0 systemd[1]: Started libpod-conmon-664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064.scope.
Jan 20 18:41:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9216fc4a72bb513b3322fe4cba4de07d12fad10af8548153c2abe59448cdaa33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9216fc4a72bb513b3322fe4cba4de07d12fad10af8548153c2abe59448cdaa33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:32 compute-0 podman[86644]: 2026-01-20 18:41:32.300774407 +0000 UTC m=+0.024748117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:32 compute-0 podman[86644]: 2026-01-20 18:41:32.40279597 +0000 UTC m=+0.126769680 container init 664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064 (image=quay.io/ceph/ceph:v19, name=heuristic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:41:32 compute-0 podman[86644]: 2026-01-20 18:41:32.409530139 +0000 UTC m=+0.133503829 container start 664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064 (image=quay.io/ceph/ceph:v19, name=heuristic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:41:32 compute-0 podman[86644]: 2026-01-20 18:41:32.412967109 +0000 UTC m=+0.136940819 container attach 664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064 (image=quay.io/ceph/ceph:v19, name=heuristic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:41:32 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 20 18:41:32 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 20 18:41:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v97: 131 pgs: 15 peering, 116 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:32 compute-0 ceph-mon[74381]: 2.0 deep-scrub starts
Jan 20 18:41:32 compute-0 ceph-mon[74381]: 2.0 deep-scrub ok
Jan 20 18:41:32 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Jan 20 18:41:32 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:32 compute-0 ceph-mon[74381]: Deploying daemon osd.2 on compute-2
Jan 20 18:41:32 compute-0 ceph-mon[74381]: 4.10 scrub starts
Jan 20 18:41:32 compute-0 ceph-mon[74381]: 4.10 scrub ok
Jan 20 18:41:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1704553861' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 18:41:32 compute-0 ceph-mon[74381]: osdmap e32: 3 total, 2 up, 3 in
Jan 20 18:41:32 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:32 compute-0 ceph-mon[74381]: pgmap v97: 131 pgs: 15 peering, 116 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 20 18:41:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1496096561' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 20 18:41:32 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:33 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 20 18:41:33 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 20 18:41:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 20 18:41:33 compute-0 ceph-mon[74381]: 2.3 scrub starts
Jan 20 18:41:33 compute-0 ceph-mon[74381]: 2.3 scrub ok
Jan 20 18:41:33 compute-0 ceph-mon[74381]: 4.11 scrub starts
Jan 20 18:41:33 compute-0 ceph-mon[74381]: 4.11 scrub ok
Jan 20 18:41:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1496096561' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Jan 20 18:41:33 compute-0 ceph-mon[74381]: Health check update: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:41:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1496096561' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 18:41:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e33 e33: 3 total, 2 up, 3 in
Jan 20 18:41:34 compute-0 heuristic_hellman[86659]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 20 18:41:34 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 2 up, 3 in
Jan 20 18:41:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:34 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:34 compute-0 systemd[1]: libpod-664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064.scope: Deactivated successfully.
Jan 20 18:41:34 compute-0 podman[86684]: 2026-01-20 18:41:34.462470954 +0000 UTC m=+0.024037908 container died 664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064 (image=quay.io/ceph/ceph:v19, name=heuristic_hellman, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9216fc4a72bb513b3322fe4cba4de07d12fad10af8548153c2abe59448cdaa33-merged.mount: Deactivated successfully.
Jan 20 18:41:34 compute-0 podman[86684]: 2026-01-20 18:41:34.494695118 +0000 UTC m=+0.056262062 container remove 664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064 (image=quay.io/ceph/ceph:v19, name=heuristic_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:41:34 compute-0 systemd[1]: libpod-conmon-664898e592f925606e4d765026a3000537ecd53cd4233e1e7393992487a89064.scope: Deactivated successfully.
Jan 20 18:41:34 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.12 deep-scrub starts
Jan 20 18:41:34 compute-0 sudo[86641]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:34 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.12 deep-scrub ok
Jan 20 18:41:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v99: 131 pgs: 15 peering, 116 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:34 compute-0 ceph-mon[74381]: 2.f scrub starts
Jan 20 18:41:34 compute-0 ceph-mon[74381]: 2.f scrub ok
Jan 20 18:41:34 compute-0 ceph-mon[74381]: 5.13 scrub starts
Jan 20 18:41:34 compute-0 ceph-mon[74381]: 5.13 scrub ok
Jan 20 18:41:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1496096561' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 18:41:34 compute-0 ceph-mon[74381]: osdmap e33: 3 total, 2 up, 3 in
Jan 20 18:41:34 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:34 compute-0 ceph-mon[74381]: pgmap v99: 131 pgs: 15 peering, 116 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:35 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:41:35 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 18:41:35 compute-0 python3[86774]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:41:35 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 20 18:41:35 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 20 18:41:35 compute-0 python3[86845]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934495.2120218-37382-258099604825384/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:41:35 compute-0 ceph-mon[74381]: 2.11 scrub starts
Jan 20 18:41:35 compute-0 ceph-mon[74381]: 2.11 scrub ok
Jan 20 18:41:35 compute-0 ceph-mon[74381]: 5.12 deep-scrub starts
Jan 20 18:41:35 compute-0 ceph-mon[74381]: 5.12 deep-scrub ok
Jan 20 18:41:35 compute-0 ceph-mon[74381]: 2.12 deep-scrub starts
Jan 20 18:41:35 compute-0 ceph-mon[74381]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:41:35 compute-0 ceph-mon[74381]: Cluster is now healthy
Jan 20 18:41:36 compute-0 sudo[86945]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jotekiakehyganyjzjhydpovlroliaip ; /usr/bin/python3'
Jan 20 18:41:36 compute-0 sudo[86945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:36 compute-0 python3[86947]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:41:36 compute-0 sudo[86945]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:36 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 20 18:41:36 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 20 18:41:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v100: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:36 compute-0 sudo[87020]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxrtcvyamekxddhczkiyxwbhcevbauqt ; /usr/bin/python3'
Jan 20 18:41:36 compute-0 sudo[87020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:37 compute-0 python3[87022]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934496.1572485-37396-14933906158317/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=a665c855d93d5fc3afe55470f505b78ed95f4d8c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:41:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:37 compute-0 sudo[87020]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 2.12 deep-scrub ok
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 4.12 scrub starts
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 4.12 scrub ok
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 2.14 scrub starts
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 2.14 scrub ok
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 4.14 scrub starts
Jan 20 18:41:37 compute-0 ceph-mon[74381]: 4.14 scrub ok
Jan 20 18:41:37 compute-0 ceph-mon[74381]: pgmap v100: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:37 compute-0 sudo[87070]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etoanribkgxhqoqbescxcnzqfipejgva ; /usr/bin/python3'
Jan 20 18:41:37 compute-0 sudo[87070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:37 compute-0 python3[87072]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:41:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:41:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:41:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:41:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:41:37 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:41:37 compute-0 podman[87073]: 2026-01-20 18:41:37.529869551 +0000 UTC m=+0.057307050 container create d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851 (image=quay.io/ceph/ceph:v19, name=interesting_faraday, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:41:37 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 20 18:41:37 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 20 18:41:37 compute-0 systemd[1]: Started libpod-conmon-d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851.scope.
Jan 20 18:41:37 compute-0 podman[87073]: 2026-01-20 18:41:37.503745508 +0000 UTC m=+0.031183027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8135ee2e431ed87b89704cdfa42c1ec1723e6791dcf2b2ab8ee54c824cf7fb1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8135ee2e431ed87b89704cdfa42c1ec1723e6791dcf2b2ab8ee54c824cf7fb1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8135ee2e431ed87b89704cdfa42c1ec1723e6791dcf2b2ab8ee54c824cf7fb1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:37 compute-0 podman[87073]: 2026-01-20 18:41:37.618699644 +0000 UTC m=+0.146137143 container init d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851 (image=quay.io/ceph/ceph:v19, name=interesting_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:37 compute-0 podman[87073]: 2026-01-20 18:41:37.624998051 +0000 UTC m=+0.152435550 container start d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851 (image=quay.io/ceph/ceph:v19, name=interesting_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:37 compute-0 podman[87073]: 2026-01-20 18:41:37.628484833 +0000 UTC m=+0.155922332 container attach d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851 (image=quay.io/ceph/ceph:v19, name=interesting_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:41:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 18:41:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3315323894' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 18:41:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3315323894' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 18:41:38 compute-0 interesting_faraday[87088]: 
Jan 20 18:41:38 compute-0 interesting_faraday[87088]: [global]
Jan 20 18:41:38 compute-0 interesting_faraday[87088]:         fsid = aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:38 compute-0 interesting_faraday[87088]:         mon_host = 192.168.122.100
Jan 20 18:41:38 compute-0 systemd[1]: libpod-d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851.scope: Deactivated successfully.
Jan 20 18:41:38 compute-0 podman[87073]: 2026-01-20 18:41:38.101387664 +0000 UTC m=+0.628825173 container died d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851 (image=quay.io/ceph/ceph:v19, name=interesting_faraday, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:41:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:38 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:38 compute-0 ceph-mon[74381]: 2.16 deep-scrub starts
Jan 20 18:41:38 compute-0 ceph-mon[74381]: 5.14 scrub starts
Jan 20 18:41:38 compute-0 ceph-mon[74381]: 5.14 scrub ok
Jan 20 18:41:38 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3315323894' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Jan 20 18:41:38 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3315323894' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 18:41:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8135ee2e431ed87b89704cdfa42c1ec1723e6791dcf2b2ab8ee54c824cf7fb1-merged.mount: Deactivated successfully.
Jan 20 18:41:38 compute-0 podman[87073]: 2026-01-20 18:41:38.274340506 +0000 UTC m=+0.801778005 container remove d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851 (image=quay.io/ceph/ceph:v19, name=interesting_faraday, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:41:38 compute-0 sudo[87070]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:38 compute-0 systemd[1]: libpod-conmon-d65a0a4db29d9f41f959b4eecc3c097928a37b46da83d8b0c93af98a0fac5851.scope: Deactivated successfully.
Jan 20 18:41:38 compute-0 sudo[87149]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpekslbxjarscevxqfrzynjhstkbvrpp ; /usr/bin/python3'
Jan 20 18:41:38 compute-0 sudo[87149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:38 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 20 18:41:38 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 20 18:41:38 compute-0 python3[87151]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v101: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:38 compute-0 podman[87152]: 2026-01-20 18:41:38.598884836 +0000 UTC m=+0.024908812 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:38 compute-0 podman[87152]: 2026-01-20 18:41:38.806359723 +0000 UTC m=+0.232383679 container create fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d (image=quay.io/ceph/ceph:v19, name=gallant_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:41:38 compute-0 systemd[1]: Started libpod-conmon-fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d.scope.
Jan 20 18:41:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e20d0f7a553d462d464e0e04dc372e1e90e41fd8d0e834ba6a3a8825c3f53b8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e20d0f7a553d462d464e0e04dc372e1e90e41fd8d0e834ba6a3a8825c3f53b8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e20d0f7a553d462d464e0e04dc372e1e90e41fd8d0e834ba6a3a8825c3f53b8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:38 compute-0 podman[87152]: 2026-01-20 18:41:38.989779293 +0000 UTC m=+0.415803349 container init fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d (image=quay.io/ceph/ceph:v19, name=gallant_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 18:41:39 compute-0 podman[87152]: 2026-01-20 18:41:39.002047948 +0000 UTC m=+0.428071934 container start fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d (image=quay.io/ceph/ceph:v19, name=gallant_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:41:39 compute-0 podman[87152]: 2026-01-20 18:41:39.070640876 +0000 UTC m=+0.496664852 container attach fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d (image=quay.io/ceph/ceph:v19, name=gallant_shtern, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:39 compute-0 ceph-mon[74381]: 2.16 deep-scrub ok
Jan 20 18:41:39 compute-0 ceph-mon[74381]: 2.17 scrub starts
Jan 20 18:41:39 compute-0 ceph-mon[74381]: 2.17 scrub ok
Jan 20 18:41:39 compute-0 ceph-mon[74381]: 3.12 scrub starts
Jan 20 18:41:39 compute-0 ceph-mon[74381]: 3.12 scrub ok
Jan 20 18:41:39 compute-0 ceph-mon[74381]: pgmap v101: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:39 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 20 18:41:39 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 20 18:41:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 20 18:41:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 20 18:41:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 18:41:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2249653865' entity='client.admin' 
Jan 20 18:41:39 compute-0 gallant_shtern[87167]: set ssl_option
Jan 20 18:41:39 compute-0 systemd[1]: libpod-fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d.scope: Deactivated successfully.
Jan 20 18:41:39 compute-0 podman[87152]: 2026-01-20 18:41:39.682414886 +0000 UTC m=+1.108438852 container died fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d (image=quay.io/ceph/ceph:v19, name=gallant_shtern, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:39 compute-0 sudo[87191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:41:39 compute-0 sudo[87191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:39 compute-0 sudo[87191]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e20d0f7a553d462d464e0e04dc372e1e90e41fd8d0e834ba6a3a8825c3f53b8-merged.mount: Deactivated successfully.
Jan 20 18:41:39 compute-0 podman[87152]: 2026-01-20 18:41:39.890006376 +0000 UTC m=+1.316030332 container remove fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d (image=quay.io/ceph/ceph:v19, name=gallant_shtern, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:39 compute-0 systemd[1]: libpod-conmon-fd181fbade5549c199af64ff867043bad576aa19449359c729bbf5746f735c1d.scope: Deactivated successfully.
Jan 20 18:41:39 compute-0 sudo[87149]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:39 compute-0 sudo[87229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:39 compute-0 sudo[87229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:39 compute-0 sudo[87229]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:39 compute-0 sudo[87254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:41:40 compute-0 sudo[87254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:40 compute-0 sudo[87302]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alrsnlfhbxqxqavbtlxvflilsadffywn ; /usr/bin/python3'
Jan 20 18:41:40 compute-0 sudo[87302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:40 compute-0 python3[87304]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:40 compute-0 podman[87305]: 2026-01-20 18:41:40.232747598 +0000 UTC m=+0.042979580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:40 compute-0 podman[87305]: 2026-01-20 18:41:40.373923679 +0000 UTC m=+0.184155631 container create 25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab (image=quay.io/ceph/ceph:v19, name=infallible_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:40 compute-0 ceph-mon[74381]: 2.18 deep-scrub starts
Jan 20 18:41:40 compute-0 ceph-mon[74381]: 2.18 deep-scrub ok
Jan 20 18:41:40 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:40 compute-0 ceph-mon[74381]: 5.17 scrub starts
Jan 20 18:41:40 compute-0 ceph-mon[74381]: 5.17 scrub ok
Jan 20 18:41:40 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:40 compute-0 ceph-mon[74381]: from='osd.2 [v2:192.168.122.102:6800/426593654,v1:192.168.122.102:6801/426593654]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 18:41:40 compute-0 ceph-mon[74381]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Jan 20 18:41:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2249653865' entity='client.admin' 
Jan 20 18:41:40 compute-0 systemd[1]: Started libpod-conmon-25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab.scope.
Jan 20 18:41:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa58d08550c1c9a802851dde9d11fcf94a3453c2b3ad2f89de4f9071aac832/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa58d08550c1c9a802851dde9d11fcf94a3453c2b3ad2f89de4f9071aac832/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa58d08550c1c9a802851dde9d11fcf94a3453c2b3ad2f89de4f9071aac832/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:40 compute-0 podman[87305]: 2026-01-20 18:41:40.459574238 +0000 UTC m=+0.269806220 container init 25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab (image=quay.io/ceph/ceph:v19, name=infallible_kowalevski, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:40 compute-0 podman[87305]: 2026-01-20 18:41:40.465802062 +0000 UTC m=+0.276034024 container start 25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab (image=quay.io/ceph/ceph:v19, name=infallible_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:40 compute-0 podman[87305]: 2026-01-20 18:41:40.469548582 +0000 UTC m=+0.279780554 container attach 25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab (image=quay.io/ceph/ceph:v19, name=infallible_kowalevski, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:40 compute-0 sudo[87254]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:40 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 20 18:41:40 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 20 18:41:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 20 18:41:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:41:40 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 18:41:40 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 18:41:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e34 e34: 3 total, 2 up, 3 in
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 2 up, 3 in
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:41 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Jan 20 18:41:41 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e34 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 infallible_kowalevski[87338]: Scheduled rgw.rgw update...
Jan 20 18:41:41 compute-0 infallible_kowalevski[87338]: Scheduled ingress.rgw.default update...
Jan 20 18:41:41 compute-0 systemd[1]: libpod-25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab.scope: Deactivated successfully.
Jan 20 18:41:41 compute-0 podman[87305]: 2026-01-20 18:41:41.225927173 +0000 UTC m=+1.036159125 container died 25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab (image=quay.io/ceph/ceph:v19, name=infallible_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:41:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7fa58d08550c1c9a802851dde9d11fcf94a3453c2b3ad2f89de4f9071aac832-merged.mount: Deactivated successfully.
Jan 20 18:41:41 compute-0 podman[87305]: 2026-01-20 18:41:41.274053419 +0000 UTC m=+1.084285401 container remove 25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab (image=quay.io/ceph/ceph:v19, name=infallible_kowalevski, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:41 compute-0 systemd[1]: libpod-conmon-25106cfea4192928c385d6015e4cfeee497739a6db090b53a2b3d4ce8d52ecab.scope: Deactivated successfully.
Jan 20 18:41:41 compute-0 sudo[87302]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: 2.1a scrub starts
Jan 20 18:41:41 compute-0 ceph-mon[74381]: 2.1a scrub ok
Jan 20 18:41:41 compute-0 ceph-mon[74381]: 4.16 scrub starts
Jan 20 18:41:41 compute-0 ceph-mon[74381]: 4.16 scrub ok
Jan 20 18:41:41 compute-0 ceph-mon[74381]: pgmap v102: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:41 compute-0 ceph-mon[74381]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 18:41:41 compute-0 ceph-mon[74381]: osdmap e34: 3 total, 2 up, 3 in
Jan 20 18:41:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:41 compute-0 ceph-mon[74381]: from='osd.2 [v2:192.168.122.102:6800/426593654,v1:192.168.122.102:6801/426593654]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 18:41:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 ceph-mon[74381]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Jan 20 18:41:41 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Jan 20 18:41:41 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Jan 20 18:41:41 compute-0 python3[87464]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:41:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:42 compute-0 python3[87535]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934501.4557319-37415-61427303263455/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:41:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 20 18:41:42 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 20 18:41:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e35 e35: 3 total, 2 up, 3 in
Jan 20 18:41:42 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 2 up, 3 in
Jan 20 18:41:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:42 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:42 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:42 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:42 compute-0 sudo[87583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sihlpaynbcarrroksxzuiegcmliujdtx ; /usr/bin/python3'
Jan 20 18:41:42 compute-0 sudo[87583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:42 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 20 18:41:42 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 20 18:41:42 compute-0 python3[87585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:42 compute-0 podman[87586]: 2026-01-20 18:41:42.699019985 +0000 UTC m=+0.024949821 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:43 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:43 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 20 18:41:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:44 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:44 compute-0 podman[87586]: 2026-01-20 18:41:44.128739069 +0000 UTC m=+1.454662125 container create 135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb (image=quay.io/ceph/ceph:v19, name=clever_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:44 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='client.14286 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:41:44 compute-0 ceph-mon[74381]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 18:41:44 compute-0 ceph-mon[74381]: Saving service ingress.rgw.default spec with placement count:2
Jan 20 18:41:44 compute-0 ceph-mon[74381]: 4.1f scrub starts
Jan 20 18:41:44 compute-0 ceph-mon[74381]: 4.1f scrub ok
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:44 compute-0 ceph-mon[74381]: 4.17 deep-scrub starts
Jan 20 18:41:44 compute-0 ceph-mon[74381]: 4.17 deep-scrub ok
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Jan 20 18:41:44 compute-0 ceph-mon[74381]: osdmap e35: 3 total, 2 up, 3 in
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:44 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.081380844s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.324928284s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.081380844s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.324928284s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088142395s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.331825256s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088142395s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.331825256s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644582748s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888420105s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644490242s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888343811s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.15( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644490242s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888343811s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.087839127s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.331756592s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.087839127s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.331756592s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644370079s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888412476s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.13( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644370079s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888412476s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644281387s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888389587s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.10( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644281387s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888389587s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.1b( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.644582748s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888420105s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.087821007s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.332023621s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088387489s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.332595825s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.087821007s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332023621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088387489s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332595825s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088265419s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.332565308s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.643874168s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888183594s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088265419s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332565308s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.d( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.643874168s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888183594s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088172913s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.332618713s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088172913s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332618713s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.643817902s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888305664s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.c( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.643817902s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888305664s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[3.0( empty local-lis/les=21/23 n=0 ec=15/15 lis/c=21/21 les/c/f=23/23/0 sis=35 pruub=15.081524849s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.326141357s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[3.0( empty local-lis/les=21/23 n=0 ec=15/15 lis/c=21/21 les/c/f=23/23/0 sis=35 pruub=15.081524849s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326141357s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088461876s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.333129883s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[5.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088461876s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333129883s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088653564s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.333381653s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088653564s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333381653s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.089056015s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.333900452s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.089056015s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333900452s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[3.8( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=35 pruub=15.081394196s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.326309204s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[3.8( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=35 pruub=15.081394196s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326309204s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088991165s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.334014893s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088991165s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.334014893s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[3.1b( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=35 pruub=15.081285477s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 active pruub 86.326347351s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[3.1b( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=35 pruub=15.081285477s) [] r=-1 lpr=35 pi=[21,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326347351s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.643026352s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 active pruub 81.888175964s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[2.a( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=35 pruub=10.643026352s) [] r=-1 lpr=35 pi=[30,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888175964s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088609695s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.333908081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088959694s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 active pruub 79.334259033s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088959694s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.334259033s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 35 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=35 pruub=8.088609695s) [] r=-1 lpr=35 pi=[23,35)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333908081s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:44 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:44 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:44 compute-0 systemd[1]: Started libpod-conmon-135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb.scope.
Jan 20 18:41:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b535459831d89b09c144b6de8810ac7f65ca7fe91b6ff5e3b59cf1e1791574a8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b535459831d89b09c144b6de8810ac7f65ca7fe91b6ff5e3b59cf1e1791574a8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b535459831d89b09c144b6de8810ac7f65ca7fe91b6ff5e3b59cf1e1791574a8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:44 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.a deep-scrub starts
Jan 20 18:41:44 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.a deep-scrub ok
Jan 20 18:41:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v106: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:45 compute-0 podman[87586]: 2026-01-20 18:41:45.307288787 +0000 UTC m=+2.633211863 container init 135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb (image=quay.io/ceph/ceph:v19, name=clever_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:41:45 compute-0 podman[87586]: 2026-01-20 18:41:45.320492077 +0000 UTC m=+2.646415143 container start 135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb (image=quay.io/ceph/ceph:v19, name=clever_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 18:41:45 compute-0 podman[87586]: 2026-01-20 18:41:45.326180957 +0000 UTC m=+2.652104093 container attach 135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb (image=quay.io/ceph/ceph:v19, name=clever_tharp, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:45 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 20 18:41:45 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service node-exporter spec with placement *
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: purged_snaps scrub starts
Jan 20 18:41:45 compute-0 ceph-mon[74381]: purged_snaps scrub ok
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 4.13 scrub starts
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 4.13 scrub ok
Jan 20 18:41:45 compute-0 ceph-mon[74381]: pgmap v105: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 5.8 scrub starts
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 5.8 scrub ok
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 5.11 deep-scrub starts
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 5.11 deep-scrub ok
Jan 20 18:41:45 compute-0 ceph-mon[74381]: 5.b scrub starts
Jan 20 18:41:45 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mon[74381]: pgmap v106: 131 pgs: 131 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:45 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:41:45 compute-0 sudo[87625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 sudo[87625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Jan 20 18:41:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Jan 20 18:41:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 20 18:41:45 compute-0 sudo[87625]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:45 compute-0 clever_tharp[87601]: Scheduled node-exporter update...
Jan 20 18:41:45 compute-0 clever_tharp[87601]: Scheduled grafana update...
Jan 20 18:41:45 compute-0 clever_tharp[87601]: Scheduled prometheus update...
Jan 20 18:41:45 compute-0 clever_tharp[87601]: Scheduled alertmanager update...
Jan 20 18:41:46 compute-0 systemd[1]: libpod-135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb.scope: Deactivated successfully.
Jan 20 18:41:46 compute-0 podman[87586]: 2026-01-20 18:41:46.013658803 +0000 UTC m=+3.339581849 container died 135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb (image=quay.io/ceph/ceph:v19, name=clever_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 18:41:46 compute-0 sudo[87650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:41:46 compute-0 sudo[87650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87650]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[87688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87688]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b535459831d89b09c144b6de8810ac7f65ca7fe91b6ff5e3b59cf1e1791574a8-merged.mount: Deactivated successfully.
Jan 20 18:41:46 compute-0 podman[87586]: 2026-01-20 18:41:46.162105276 +0000 UTC m=+3.488028332 container remove 135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb (image=quay.io/ceph/ceph:v19, name=clever_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:41:46 compute-0 sudo[87713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:46 compute-0 sudo[87713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87713]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87583]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 systemd[1]: libpod-conmon-135497e2f529a68c6c4b4c2d4709b9357519384b4b029af1427a0ce52827f4fb.scope: Deactivated successfully.
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:46 compute-0 sudo[87739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[87739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87739]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[87787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87787]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[87812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87812]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 18:41:46 compute-0 sudo[87837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87837]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 sudo[87862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:41:46 compute-0 sudo[87862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87862]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 20 18:41:46 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 20 18:41:46 compute-0 sudo[87887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:41:46 compute-0 sudo[87887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87887]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuxarmbjwmzngvrtvwxzfzkfazxftzue ; /usr/bin/python3'
Jan 20 18:41:46 compute-0 sudo[87934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v107: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:46 compute-0 sudo[87936]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[87936]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87936]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 sudo[87963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:41:46 compute-0 sudo[87963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87963]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[87988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[87988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[87988]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[88036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[88036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[88036]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 python3[87943]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:46 compute-0 sudo[88062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:41:46 compute-0 sudo[88062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[88062]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:46 compute-0 sudo[88099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:46 compute-0 sudo[88099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:46 compute-0 sudo[88099]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:47 compute-0 podman[88061]: 2026-01-20 18:41:46.913808444 +0000 UTC m=+0.029763261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 20 18:41:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:41:47 compute-0 podman[88061]: 2026-01-20 18:41:47.214992044 +0000 UTC m=+0.330946841 container create a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683 (image=quay.io/ceph/ceph:v19, name=nifty_williams, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 5.b scrub ok
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 3.14 deep-scrub starts
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 3.14 deep-scrub ok
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 5.a deep-scrub starts
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 5.a deep-scrub ok
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 5.15 scrub starts
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 5.15 scrub ok
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 4.b scrub starts
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 4.b scrub ok
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Saving service node-exporter spec with placement *
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Saving service grafana spec with placement compute-0;count:1
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Adjusting osd_memory_target on compute-2 to 127.9M
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Saving service prometheus spec with placement compute-0;count:1
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Unable to set osd_memory_target on compute-2 to 134203392: error parsing value: Value '134203392' is below minimum 939524096
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Saving service alertmanager spec with placement compute-0;count:1
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:47 compute-0 ceph-mon[74381]: OSD bench result of 5550.463047 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 18:41:47 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 3.10 scrub starts
Jan 20 18:41:47 compute-0 ceph-mon[74381]: 3.10 scrub ok
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:47 compute-0 ceph-mon[74381]: pgmap v107: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:47 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:41:47 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:41:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:47 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:47 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 20 18:41:47 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 20 18:41:47 compute-0 systemd[1]: Started libpod-conmon-a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683.scope.
Jan 20 18:41:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220078f10b9e6f5fb19c28aa0d96bfd47b6be9491e816974b50b38f4124284c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220078f10b9e6f5fb19c28aa0d96bfd47b6be9491e816974b50b38f4124284c2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/220078f10b9e6f5fb19c28aa0d96bfd47b6be9491e816974b50b38f4124284c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:48 compute-0 ceph-mgr[74676]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/426593654; not ready for session (expect reconnect)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:48 compute-0 ceph-mgr[74676]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 20 18:41:48 compute-0 podman[88061]: 2026-01-20 18:41:48.335139464 +0000 UTC m=+1.451094281 container init a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683 (image=quay.io/ceph/ceph:v19, name=nifty_williams, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/426593654,v1:192.168.122.102:6801/426593654] boot
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 20 18:41:48 compute-0 podman[88061]: 2026-01-20 18:41:48.343806404 +0000 UTC m=+1.459761201 container start a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683 (image=quay.io/ceph/ceph:v19, name=nifty_williams, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:41:48 compute-0 podman[88061]: 2026-01-20 18:41:48.34819507 +0000 UTC m=+1.464149897 container attach a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683 (image=quay.io/ceph/ceph:v19, name=nifty_williams, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: 3.b scrub starts
Jan 20 18:41:48 compute-0 ceph-mon[74381]: 3.b scrub ok
Jan 20 18:41:48 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:48 compute-0 ceph-mon[74381]: 5.6 scrub starts
Jan 20 18:41:48 compute-0 ceph-mon[74381]: 5.6 scrub ok
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.893354416s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.324928284s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.456697941s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888420105s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.1b( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.456637859s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888420105s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899529219s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.331825256s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.456015587s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888343811s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.13( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899491072s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.331825256s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.15( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.455995560s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888343811s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.455835342s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888412476s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.1d( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.893311977s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.324928284s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.455611706s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888389587s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899775028s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332595825s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.10( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.455557346s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888389587s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.8( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899752855s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332595825s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.13( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.455431461s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888412476s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.898710251s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.331756592s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.898802280s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332023621s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.14( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.898770571s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332023621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899252176s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332565308s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.454864025s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888305664s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.454607964s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888183594s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.454579830s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888175964s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.a( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.454565525s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888175964s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.d( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.454579353s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888183594s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.898922443s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332618713s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.d( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.898905277s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332618713s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.12( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.898650408s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.331756592s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.b( empty local-lis/les=23/24 n=0 ec=23/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899232149s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.332565308s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[3.0( empty local-lis/les=21/23 n=0 ec=15/15 lis/c=21/21 les/c/f=23/23/0 sis=36 pruub=10.892220497s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326141357s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[3.0( empty local-lis/les=21/23 n=0 ec=15/15 lis/c=21/21 les/c/f=23/23/0 sis=36 pruub=10.892192841s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326141357s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899090290s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333129883s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[5.0( empty local-lis/les=23/24 n=0 ec=18/18 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899070501s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333129883s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899250984s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333381653s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.6( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899227858s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333381653s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[2.c( empty local-lis/les=30/31 n=0 ec=21/14 lis/c=30/30 les/c/f=31/31/0 sis=36 pruub=6.454805851s) [2] r=-1 lpr=36 pi=[30,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 81.888305664s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899578333s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333900452s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.3( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899560213s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333900452s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899467945s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333908081s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.2( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899451494s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.333908081s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[3.8( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=36 pruub=10.891829491s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326309204s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[3.8( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=36 pruub=10.891813278s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326309204s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899348021s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.334014893s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[3.1b( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=36 pruub=10.891597748s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326347351s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[3.1b( empty local-lis/les=21/23 n=0 ec=21/15 lis/c=21/21 les/c/f=23/23/0 sis=36 pruub=10.891576767s) [2] r=-1 lpr=36 pi=[21,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 86.326347351s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899468660s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.334259033s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.19( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899450064s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.334259033s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 36 pg[4.1c( empty local-lis/les=23/24 n=0 ec=23/16 lis/c=23/23 les/c/f=24/24/0 sis=36 pruub=3.899331331s) [2] r=-1 lpr=36 pi=[23,36)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 79.334014893s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:48 compute-0 sudo[88132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:48 compute-0 sudo[88132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:48 compute-0 sudo[88132]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:48 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 20 18:41:48 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 20 18:41:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:48 compute-0 sudo[88174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:41:48 compute-0 sudo[88174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:49.023085222 +0000 UTC m=+0.101715756 container create 311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1520162648' entity='client.admin' 
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:48.945963749 +0000 UTC m=+0.024594303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:49 compute-0 systemd[1]: libpod-a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683.scope: Deactivated successfully.
Jan 20 18:41:49 compute-0 podman[88061]: 2026-01-20 18:41:49.052194094 +0000 UTC m=+2.168148881 container died a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683 (image=quay.io/ceph/ceph:v19, name=nifty_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 18:41:49 compute-0 systemd[1]: Started libpod-conmon-311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad.scope.
Jan 20 18:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-220078f10b9e6f5fb19c28aa0d96bfd47b6be9491e816974b50b38f4124284c2-merged.mount: Deactivated successfully.
Jan 20 18:41:49 compute-0 podman[88061]: 2026-01-20 18:41:49.094784562 +0000 UTC m=+2.210739359 container remove a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683 (image=quay.io/ceph/ceph:v19, name=nifty_williams, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:49 compute-0 systemd[1]: libpod-conmon-a45c5e1422c5735507b59f2bbe078c454d48660adeca9d1ebf96a3cb09e47683.scope: Deactivated successfully.
Jan 20 18:41:49 compute-0 sudo[87934]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e36 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:49.207839298 +0000 UTC m=+0.286469852 container init 311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:49.214791932 +0000 UTC m=+0.293422466 container start 311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:49 compute-0 clever_shirley[88266]: 167 167
Jan 20 18:41:49 compute-0 systemd[1]: libpod-311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad.scope: Deactivated successfully.
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:49.222095175 +0000 UTC m=+0.300725709 container attach 311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:49.222912127 +0000 UTC m=+0.301542661 container died 311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:41:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-85a95978e1e2df3e20637027b2f00378591a23372a5c95d58aeeaac61d8a3440-merged.mount: Deactivated successfully.
Jan 20 18:41:49 compute-0 sudo[88303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svkomzbftqzkaqphoqvkgtlturtwdujz ; /usr/bin/python3'
Jan 20 18:41:49 compute-0 sudo[88303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:49 compute-0 podman[88241]: 2026-01-20 18:41:49.262449295 +0000 UTC m=+0.341079819 container remove 311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:49 compute-0 systemd[1]: libpod-conmon-311f85d0ff5f95b52bcf392dff9ba0fc7ad2204f2592706103e459c9480d36ad.scope: Deactivated successfully.
Jan 20 18:41:49 compute-0 python3[88310]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 20 18:41:49 compute-0 podman[88321]: 2026-01-20 18:41:49.470069796 +0000 UTC m=+0.088316271 container create 56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:49 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 20 18:41:49 compute-0 podman[88321]: 2026-01-20 18:41:49.405263659 +0000 UTC m=+0.023510164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:49 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 5.16 scrub starts
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 5.16 scrub ok
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 3.f scrub starts
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 3.f scrub ok
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: osd.2 [v2:192.168.122.102:6800/426593654,v1:192.168.122.102:6801/426593654] boot
Jan 20 18:41:49 compute-0 ceph-mon[74381]: osdmap e36: 3 total, 3 up, 3 in
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 4.7 scrub starts
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 4.7 scrub ok
Jan 20 18:41:49 compute-0 ceph-mon[74381]: pgmap v109: 131 pgs: 131 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Jan 20 18:41:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1520162648' entity='client.admin' 
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 5.9 scrub starts
Jan 20 18:41:49 compute-0 ceph-mon[74381]: 5.9 scrub ok
Jan 20 18:41:49 compute-0 systemd[1]: Started libpod-conmon-56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9.scope.
Jan 20 18:41:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 20 18:41:49 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 20 18:41:49 compute-0 podman[88333]: 2026-01-20 18:41:49.595313885 +0000 UTC m=+0.183313709 container create 0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27 (image=quay.io/ceph/ceph:v19, name=jovial_faraday, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:41:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d72edfaaa85ff9905ebc867203fe989fa9c0ca67d456249957d7c61b9f4e51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d72edfaaa85ff9905ebc867203fe989fa9c0ca67d456249957d7c61b9f4e51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d72edfaaa85ff9905ebc867203fe989fa9c0ca67d456249957d7c61b9f4e51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d72edfaaa85ff9905ebc867203fe989fa9c0ca67d456249957d7c61b9f4e51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44d72edfaaa85ff9905ebc867203fe989fa9c0ca67d456249957d7c61b9f4e51/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 systemd[1]: Started libpod-conmon-0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27.scope.
Jan 20 18:41:49 compute-0 podman[88321]: 2026-01-20 18:41:49.627438196 +0000 UTC m=+0.245684691 container init 56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:49 compute-0 podman[88321]: 2026-01-20 18:41:49.637619965 +0000 UTC m=+0.255866440 container start 56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1acc9c7dfa5f230f561e74ede9ece8baee6af274ca721eb3880ed0fa18f7c80/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1acc9c7dfa5f230f561e74ede9ece8baee6af274ca721eb3880ed0fa18f7c80/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1acc9c7dfa5f230f561e74ede9ece8baee6af274ca721eb3880ed0fa18f7c80/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:49 compute-0 podman[88321]: 2026-01-20 18:41:49.644482807 +0000 UTC m=+0.262729282 container attach 56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:49 compute-0 podman[88333]: 2026-01-20 18:41:49.651493213 +0000 UTC m=+0.239493057 container init 0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27 (image=quay.io/ceph/ceph:v19, name=jovial_faraday, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:49 compute-0 podman[88333]: 2026-01-20 18:41:49.658881679 +0000 UTC m=+0.246881503 container start 0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27 (image=quay.io/ceph/ceph:v19, name=jovial_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:49 compute-0 podman[88333]: 2026-01-20 18:41:49.662538206 +0000 UTC m=+0.250538030 container attach 0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27 (image=quay.io/ceph/ceph:v19, name=jovial_faraday, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:41:49 compute-0 podman[88333]: 2026-01-20 18:41:49.569358546 +0000 UTC m=+0.157358390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:49 compute-0 gallant_buck[88350]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:41:49 compute-0 gallant_buck[88350]: --> All data devices are unavailable
Jan 20 18:41:49 compute-0 systemd[1]: libpod-56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9.scope: Deactivated successfully.
Jan 20 18:41:49 compute-0 podman[88321]: 2026-01-20 18:41:49.992397556 +0000 UTC m=+0.610644031 container died 56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:41:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Jan 20 18:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-44d72edfaaa85ff9905ebc867203fe989fa9c0ca67d456249957d7c61b9f4e51-merged.mount: Deactivated successfully.
Jan 20 18:41:50 compute-0 podman[88321]: 2026-01-20 18:41:50.037704757 +0000 UTC m=+0.655951232 container remove 56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:41:50 compute-0 systemd[1]: libpod-conmon-56c91b9dbb7f95608bf7b7e8099da50f1fff4bab9ee1151d031b70c8abad24c9.scope: Deactivated successfully.
Jan 20 18:41:50 compute-0 sudo[88174]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:50 compute-0 sudo[88401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:50 compute-0 sudo[88401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:50 compute-0 sudo[88401]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:50 compute-0 sudo[88426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:41:50 compute-0 sudo[88426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3708826816' entity='client.admin' 
Jan 20 18:41:50 compute-0 systemd[1]: libpod-0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27.scope: Deactivated successfully.
Jan 20 18:41:50 compute-0 podman[88333]: 2026-01-20 18:41:50.345650496 +0000 UTC m=+0.933650310 container died 0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27 (image=quay.io/ceph/ceph:v19, name=jovial_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 18:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1acc9c7dfa5f230f561e74ede9ece8baee6af274ca721eb3880ed0fa18f7c80-merged.mount: Deactivated successfully.
Jan 20 18:41:50 compute-0 podman[88333]: 2026-01-20 18:41:50.390959426 +0000 UTC m=+0.978959250 container remove 0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27 (image=quay.io/ceph/ceph:v19, name=jovial_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:41:50 compute-0 systemd[1]: libpod-conmon-0e89ec03fbf29d8293dac0f01332932712e4f65ead5e580967228a116bc76e27.scope: Deactivated successfully.
Jan 20 18:41:50 compute-0 sudo[88303]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:50 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.7 deep-scrub starts
Jan 20 18:41:50 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.7 deep-scrub ok
Jan 20 18:41:50 compute-0 sudo[88531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oeudaqfmemlwvhsahgcdnmqiexzcvesq ; /usr/bin/python3'
Jan 20 18:41:50 compute-0 sudo[88531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.553269347 +0000 UTC m=+0.038584744 container create 40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hofstadter, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 18:41:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v111: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:50 compute-0 ceph-mon[74381]: 4.0 scrub starts
Jan 20 18:41:50 compute-0 ceph-mon[74381]: 4.0 scrub ok
Jan 20 18:41:50 compute-0 ceph-mon[74381]: osdmap e37: 3 total, 3 up, 3 in
Jan 20 18:41:50 compute-0 ceph-mon[74381]: 4.a scrub starts
Jan 20 18:41:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3708826816' entity='client.admin' 
Jan 20 18:41:50 compute-0 systemd[1]: Started libpod-conmon-40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966.scope.
Jan 20 18:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.535523037 +0000 UTC m=+0.020838464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.636481402 +0000 UTC m=+0.121796829 container init 40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hofstadter, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.644197076 +0000 UTC m=+0.129512473 container start 40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:50 compute-0 optimistic_hofstadter[88548]: 167 167
Jan 20 18:41:50 compute-0 systemd[1]: libpod-40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966.scope: Deactivated successfully.
Jan 20 18:41:50 compute-0 python3[88541]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.703641111 +0000 UTC m=+0.188956508 container attach 40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.703985211 +0000 UTC m=+0.189300608 container died 40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-775ace00d05545b3573d562836d4c1f00f832ef25443022099dcfff836e89e18-merged.mount: Deactivated successfully.
Jan 20 18:41:50 compute-0 podman[88524]: 2026-01-20 18:41:50.788528371 +0000 UTC m=+0.273843768 container remove 40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_hofstadter, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:50 compute-0 systemd[1]: libpod-conmon-40ad2ca218f1aff5ef7f050508502a0929036a3a1ebb0d526b2b3c1fcf1ea966.scope: Deactivated successfully.
Jan 20 18:41:50 compute-0 podman[88564]: 2026-01-20 18:41:50.720272582 +0000 UTC m=+0.025143427 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:50 compute-0 podman[88564]: 2026-01-20 18:41:50.819705357 +0000 UTC m=+0.124576182 container create af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63 (image=quay.io/ceph/ceph:v19, name=wonderful_turing, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:41:50 compute-0 systemd[1]: Started libpod-conmon-af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63.scope.
Jan 20 18:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba02b21897277c85677adca6a3c0d48f211626b8501129ecf54c16f8f5457ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba02b21897277c85677adca6a3c0d48f211626b8501129ecf54c16f8f5457ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ba02b21897277c85677adca6a3c0d48f211626b8501129ecf54c16f8f5457ed/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:50 compute-0 podman[88564]: 2026-01-20 18:41:50.918776342 +0000 UTC m=+0.223647187 container init af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63 (image=quay.io/ceph/ceph:v19, name=wonderful_turing, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:50 compute-0 podman[88564]: 2026-01-20 18:41:50.926149127 +0000 UTC m=+0.231019952 container start af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63 (image=quay.io/ceph/ceph:v19, name=wonderful_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:41:50 compute-0 podman[88564]: 2026-01-20 18:41:50.929486056 +0000 UTC m=+0.234356901 container attach af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63 (image=quay.io/ceph/ceph:v19, name=wonderful_turing, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:50 compute-0 podman[88592]: 2026-01-20 18:41:50.941032942 +0000 UTC m=+0.038166142 container create b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:50 compute-0 systemd[1]: Started libpod-conmon-b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f.scope.
Jan 20 18:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e968b7c93087235a45b9a0ddce8c0e35f53c55ea6337206eef1841988f47ec1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e968b7c93087235a45b9a0ddce8c0e35f53c55ea6337206eef1841988f47ec1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e968b7c93087235a45b9a0ddce8c0e35f53c55ea6337206eef1841988f47ec1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e968b7c93087235a45b9a0ddce8c0e35f53c55ea6337206eef1841988f47ec1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:51 compute-0 podman[88592]: 2026-01-20 18:41:50.924991387 +0000 UTC m=+0.022124607 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Jan 20 18:41:51 compute-0 podman[88592]: 2026-01-20 18:41:51.316562993 +0000 UTC m=+0.413696203 container init b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:41:51 compute-0 podman[88592]: 2026-01-20 18:41:51.322933522 +0000 UTC m=+0.420066722 container start b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_brown, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:41:51 compute-0 podman[88592]: 2026-01-20 18:41:51.339161901 +0000 UTC m=+0.436295121 container attach b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:41:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2934488772' entity='client.admin' 
Jan 20 18:41:51 compute-0 systemd[1]: libpod-af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63.scope: Deactivated successfully.
Jan 20 18:41:51 compute-0 podman[88564]: 2026-01-20 18:41:51.428063256 +0000 UTC m=+0.732934081 container died af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63 (image=quay.io/ceph/ceph:v19, name=wonderful_turing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ba02b21897277c85677adca6a3c0d48f211626b8501129ecf54c16f8f5457ed-merged.mount: Deactivated successfully.
Jan 20 18:41:51 compute-0 podman[88564]: 2026-01-20 18:41:51.482655793 +0000 UTC m=+0.787526618 container remove af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63 (image=quay.io/ceph/ceph:v19, name=wonderful_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:41:51 compute-0 systemd[1]: libpod-conmon-af3aeac542b718bc28030783e04208b6ff395383ef696cfaf720bf61b13aed63.scope: Deactivated successfully.
Jan 20 18:41:51 compute-0 sudo[88531]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:51 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 20 18:41:51 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 20 18:41:51 compute-0 silly_brown[88609]: {
Jan 20 18:41:51 compute-0 silly_brown[88609]:     "0": [
Jan 20 18:41:51 compute-0 silly_brown[88609]:         {
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "devices": [
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "/dev/loop3"
Jan 20 18:41:51 compute-0 silly_brown[88609]:             ],
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "lv_name": "ceph_lv0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "lv_size": "21470642176",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "name": "ceph_lv0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "tags": {
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.cluster_name": "ceph",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.crush_device_class": "",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.encrypted": "0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.osd_id": "0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.type": "block",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.vdo": "0",
Jan 20 18:41:51 compute-0 silly_brown[88609]:                 "ceph.with_tpm": "0"
Jan 20 18:41:51 compute-0 silly_brown[88609]:             },
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "type": "block",
Jan 20 18:41:51 compute-0 silly_brown[88609]:             "vg_name": "ceph_vg0"
Jan 20 18:41:51 compute-0 silly_brown[88609]:         }
Jan 20 18:41:51 compute-0 silly_brown[88609]:     ]
Jan 20 18:41:51 compute-0 silly_brown[88609]: }
Jan 20 18:41:51 compute-0 systemd[1]: libpod-b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f.scope: Deactivated successfully.
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 4.2 scrub starts
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 4.2 scrub ok
Jan 20 18:41:51 compute-0 conmon[88609]: conmon b153f6b71befb41cbe81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f.scope/container/memory.events
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 4.a scrub ok
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 3.7 deep-scrub starts
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 3.7 deep-scrub ok
Jan 20 18:41:51 compute-0 ceph-mon[74381]: pgmap v111: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 3.1b scrub starts
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 3.1b scrub ok
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 3.c scrub starts
Jan 20 18:41:51 compute-0 ceph-mon[74381]: 3.c scrub ok
Jan 20 18:41:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2934488772' entity='client.admin' 
Jan 20 18:41:51 compute-0 podman[88592]: 2026-01-20 18:41:51.622063376 +0000 UTC m=+0.719196596 container died b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_brown, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 18:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e968b7c93087235a45b9a0ddce8c0e35f53c55ea6337206eef1841988f47ec1-merged.mount: Deactivated successfully.
Jan 20 18:41:51 compute-0 podman[88592]: 2026-01-20 18:41:51.77503415 +0000 UTC m=+0.872167350 container remove b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 18:41:51 compute-0 systemd[1]: libpod-conmon-b153f6b71befb41cbe814019869b53e688f78cfdb7b34658b2822909d4c7746f.scope: Deactivated successfully.
Jan 20 18:41:51 compute-0 sudo[88426]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:51 compute-0 sudo[88667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:41:51 compute-0 sudo[88667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:51 compute-0 sudo[88667]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:51 compute-0 sudo[88693]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:41:51 compute-0 sudo[88693]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:41:51 compute-0 sudo[88738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vibxedmlcbafgiyuhervgrevegazmlmh ; /usr/bin/python3'
Jan 20 18:41:51 compute-0 sudo[88738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:52 compute-0 python3[88742]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
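
This Zuul-driven Ansible task shells out to podman to find any mgr container by name pattern and print its full command line. A rough Python equivalent of what the command module executes, with the name filter and Go template copied verbatim from the log:

    import subprocess

    # Same invocation the ansible-ansible.legacy.command task performed:
    # list all containers matching a ceph mgr daemon name and print each
    # container's untruncated command line via a Go template.
    result = subprocess.run(
        ["podman", "ps", "-a",
         "-f", "name=ceph-?(.*)-mgr.*",
         "--format", "{{.Command}}",
         "--no-trunc"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)
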
Jan 20 18:41:52 compute-0 sudo[88738]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.307002056 +0000 UTC m=+0.065084256 container create fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:41:52 compute-0 systemd[1]: Started libpod-conmon-fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c.scope.
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.26305089 +0000 UTC m=+0.021133090 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:52 compute-0 sudo[88837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jreyxgpyccvdsftqwmrxfnjxhtofmbzb ; /usr/bin/python3'
Jan 20 18:41:52 compute-0 sudo[88837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.447215161 +0000 UTC m=+0.205297391 container init fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_burnell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.455756327 +0000 UTC m=+0.213838527 container start fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_burnell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:41:52 compute-0 priceless_burnell[88811]: 167 167
Jan 20 18:41:52 compute-0 systemd[1]: libpod-fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c.scope: Deactivated successfully.
Jan 20 18:41:52 compute-0 python3[88839]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.cepfkm/server_addr 192.168.122.100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
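
This task (and its repeats below for compute-1 and compute-2) runs the ceph CLI inside a throwaway `podman run --rm` container to pin one mgr's dashboard listener to that host's IP via `config set mgr mgr/dashboard/<mgr>/server_addr <ip>`. A condensed sketch of the same loop, assuming a host-installed ceph CLI with the admin keyring instead of the containerized one used in the log; the mgr names and addresses are the ones that appear in this section:

    import subprocess

    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    DASHBOARD_ADDRS = {
        "compute-0.cepfkm": "192.168.122.100",
        "compute-1.whkwsm": "192.168.122.101",
        "compute-2.pyghhf": "192.168.122.102",
    }

    for mgr, addr in DASHBOARD_ADDRS.items():
        # Bind each mgr's dashboard server to its own host address.
        subprocess.run(
            ["ceph", "--fsid", FSID,
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "config", "set", "mgr",
             f"mgr/dashboard/{mgr}/server_addr", addr],
            check=True,
        )
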
Jan 20 18:41:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 20 18:41:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 20 18:41:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v112: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.604357564 +0000 UTC m=+0.362439844 container attach fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_burnell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.60531948 +0000 UTC m=+0.363401700 container died fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_burnell, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:52 compute-0 podman[88853]: 2026-01-20 18:41:52.614047561 +0000 UTC m=+0.059183389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 3.1 scrub starts
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 3.1 scrub ok
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 4.1c scrub starts
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 4.1c scrub ok
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 3.3 scrub starts
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 3.3 scrub ok
Jan 20 18:41:52 compute-0 ceph-mon[74381]: 4.6 scrub starts
Jan 20 18:41:52 compute-0 podman[88853]: 2026-01-20 18:41:52.732370486 +0000 UTC m=+0.177506294 container create f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046 (image=quay.io/ceph/ceph:v19, name=affectionate_vaughan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-af17b8e2f32acc84454e4a42937c48e693b2f81954e029f016db140fc75ba0c0-merged.mount: Deactivated successfully.
Jan 20 18:41:52 compute-0 podman[88795]: 2026-01-20 18:41:52.858753395 +0000 UTC m=+0.616835595 container remove fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_burnell, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Jan 20 18:41:52 compute-0 systemd[1]: Started libpod-conmon-f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046.scope.
Jan 20 18:41:52 compute-0 systemd[1]: libpod-conmon-fad6eed30b316c7e675e8f5c74e61a4cf43ad32c185974a05665c72e5616ca0c.scope: Deactivated successfully.
Jan 20 18:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac61d9feb219508ca833802329d9c009ffca31c254fb86980e1264cfec5a55b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac61d9feb219508ca833802329d9c009ffca31c254fb86980e1264cfec5a55b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dac61d9feb219508ca833802329d9c009ffca31c254fb86980e1264cfec5a55b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:52 compute-0 podman[88853]: 2026-01-20 18:41:52.9827406 +0000 UTC m=+0.427876438 container init f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046 (image=quay.io/ceph/ceph:v19, name=affectionate_vaughan, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 18:41:52 compute-0 podman[88853]: 2026-01-20 18:41:52.990928257 +0000 UTC m=+0.436064065 container start f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046 (image=quay.io/ceph/ceph:v19, name=affectionate_vaughan, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:41:52 compute-0 podman[88853]: 2026-01-20 18:41:52.995243481 +0000 UTC m=+0.440379289 container attach f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046 (image=quay.io/ceph/ceph:v19, name=affectionate_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:53.007597479 +0000 UTC m=+0.040685189 container create 768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:41:53 compute-0 systemd[1]: Started libpod-conmon-768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207.scope.
Jan 20 18:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323f324ae31a99426b6d07fc4ff6500e981044bea8200415b15bb372c00976c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323f324ae31a99426b6d07fc4ff6500e981044bea8200415b15bb372c00976c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323f324ae31a99426b6d07fc4ff6500e981044bea8200415b15bb372c00976c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/323f324ae31a99426b6d07fc4ff6500e981044bea8200415b15bb372c00976c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:52.990302501 +0000 UTC m=+0.023390241 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:53.08690695 +0000 UTC m=+0.119994680 container init 768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_raman, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:53.094389538 +0000 UTC m=+0.127477248 container start 768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_raman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:53.100194642 +0000 UTC m=+0.133282362 container attach 768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_raman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 18:41:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.cepfkm/server_addr}] v 0)
Jan 20 18:41:53 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 20 18:41:53 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 20 18:41:53 compute-0 lvm[88992]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:41:53 compute-0 lvm[88992]: VG ceph_vg0 finished
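
lvm's event-driven pvscan reports that /dev/loop3 coming online made VG ceph_vg0 complete (all of its PVs present), which is what allows the OSD's logical volumes to autoactivate. A hedged sketch of checking VG completeness yourself through LVM's JSON reporting; the `vgs` flags are standard, but field availability can vary by LVM version:

    import json
    import subprocess

    # `vgs --reportformat json` emits {"report": [{"vg": [...]}]},
    # with all field values rendered as strings.
    out = subprocess.run(
        ["vgs", "--reportformat", "json",
         "-o", "vg_name,pv_count,vg_missing_pv_count"],
        capture_output=True, text=True, check=True,
    ).stdout
    for vg in json.loads(out)["report"][0]["vg"]:
        complete = vg["vg_missing_pv_count"] == "0"
        print(vg["vg_name"], "complete" if complete else "incomplete")
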
Jan 20 18:41:53 compute-0 sleepy_raman[88897]: {}
Jan 20 18:41:53 compute-0 systemd[1]: libpod-768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207.scope: Deactivated successfully.
Jan 20 18:41:53 compute-0 systemd[1]: libpod-768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207.scope: Consumed 1.064s CPU time.
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:53.830792831 +0000 UTC m=+0.863880541 container died 768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/716967892' entity='client.admin' 
Jan 20 18:41:53 compute-0 ceph-mon[74381]: 3.2 scrub starts
Jan 20 18:41:53 compute-0 ceph-mon[74381]: 3.2 scrub ok
Jan 20 18:41:53 compute-0 ceph-mon[74381]: pgmap v112: 131 pgs: 44 peering, 87 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:53 compute-0 ceph-mon[74381]: 4.6 scrub ok
Jan 20 18:41:53 compute-0 ceph-mon[74381]: 4.5 deep-scrub starts
Jan 20 18:41:53 compute-0 ceph-mon[74381]: 4.5 deep-scrub ok
Jan 20 18:41:53 compute-0 systemd[1]: libpod-f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046.scope: Deactivated successfully.
Jan 20 18:41:53 compute-0 podman[88853]: 2026-01-20 18:41:53.897973531 +0000 UTC m=+1.343109349 container died f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046 (image=quay.io/ceph/ceph:v19, name=affectionate_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 18:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-323f324ae31a99426b6d07fc4ff6500e981044bea8200415b15bb372c00976c5-merged.mount: Deactivated successfully.
Jan 20 18:41:53 compute-0 podman[88880]: 2026-01-20 18:41:53.925169231 +0000 UTC m=+0.958256941 container remove 768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_raman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:41:53 compute-0 systemd[1]: libpod-conmon-768ca533e4ffdc798672cbaa939fac5f69a5d5bf9e9c0f92c26f38ffed410207.scope: Deactivated successfully.
Jan 20 18:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-dac61d9feb219508ca833802329d9c009ffca31c254fb86980e1264cfec5a55b-merged.mount: Deactivated successfully.
Jan 20 18:41:53 compute-0 podman[88853]: 2026-01-20 18:41:53.951913979 +0000 UTC m=+1.397049787 container remove f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046 (image=quay.io/ceph/ceph:v19, name=affectionate_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Jan 20 18:41:53 compute-0 systemd[1]: libpod-conmon-f08fb73e3eb6f7e2aa806f3cae96583dcdca896d062714c2200b5ff53a20f046.scope: Deactivated successfully.
Jan 20 18:41:53 compute-0 sudo[88693]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:53 compute-0 sudo[88837]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:41:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:41:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:54 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev ca8d558d-92ff-44a4-acaa-2d2a3cf2b0c4 (Updating rgw.rgw deployment (+3 -> 3))
Jan 20 18:41:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mqbqmb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 18:41:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mqbqmb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:41:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mqbqmb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:41:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 20 18:41:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:41:54 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:54 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.mqbqmb on compute-2
Jan 20 18:41:54 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.mqbqmb on compute-2
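
Around these lines the cephadm mgr module provisions a new RGW daemon: it asks the mon for a scoped keyring with `auth get-or-create`, sets that daemon's `rgw_frontends` (the value is elided in the log), renders a client config with `config generate-minimal-conf`, and then deploys rgw.rgw.compute-2.mqbqmb on compute-2. A sketch of issuing the same mon commands through the python-rados bindings, assuming python3-rados and an admin keyring are available; the entity name and caps are copied from the log:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf",
                          name="client.admin")
    cluster.connect()

    def mon(cmd):
        # mon_command takes a JSON-encoded command and returns
        # (retcode, output_buffer, error_string).
        ret, out, err = cluster.mon_command(json.dumps(cmd), b"")
        if ret != 0:
            raise RuntimeError(err)
        return out

    # Keyring with the same caps the mgr requested in this log.
    keyring = mon({
        "prefix": "auth get-or-create",
        "entity": "client.rgw.rgw.compute-2.mqbqmb",
        "caps": ["mon", "allow *", "mgr", "allow rw",
                 "osd", "allow rwx tag rgw *=*"],
    })
    minimal_conf = mon({"prefix": "config generate-minimal-conf"})
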
Jan 20 18:41:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e37 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:41:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:54 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 20 18:41:54 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 20 18:41:54 compute-0 sudo[89044]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzeblnvtppiczzvvitgpledgllolwrxw ; /usr/bin/python3'
Jan 20 18:41:54 compute-0 sudo[89044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:54 compute-0 python3[89046]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.whkwsm/server_addr 192.168.122.101 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:54 compute-0 podman[89047]: 2026-01-20 18:41:54.836497568 +0000 UTC m=+0.026705079 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:55 compute-0 podman[89047]: 2026-01-20 18:41:55.118253014 +0000 UTC m=+0.308460515 container create 793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6 (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:55 compute-0 ceph-mon[74381]: 5.5 scrub starts
Jan 20 18:41:55 compute-0 ceph-mon[74381]: 5.5 scrub ok
Jan 20 18:41:55 compute-0 ceph-mon[74381]: 2.d scrub starts
Jan 20 18:41:55 compute-0 ceph-mon[74381]: 2.d scrub ok
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/716967892' entity='client.admin' 
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mqbqmb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mqbqmb", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:55 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:41:55 compute-0 ceph-mon[74381]: Deploying daemon rgw.rgw.compute-2.mqbqmb on compute-2
Jan 20 18:41:55 compute-0 ceph-mon[74381]: 5.7 scrub starts
Jan 20 18:41:55 compute-0 ceph-mon[74381]: 5.7 scrub ok
Jan 20 18:41:55 compute-0 ceph-mon[74381]: pgmap v113: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:55 compute-0 systemd[1]: Started libpod-conmon-793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6.scope.
Jan 20 18:41:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ed72aa489f60a37c6603fd3207b29564f907c9c5248a963b5f9ec62eb9aab1/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ed72aa489f60a37c6603fd3207b29564f907c9c5248a963b5f9ec62eb9aab1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51ed72aa489f60a37c6603fd3207b29564f907c9c5248a963b5f9ec62eb9aab1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:41:55 compute-0 podman[89047]: 2026-01-20 18:41:55.230585701 +0000 UTC m=+0.420793202 container init 793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6 (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:41:55 compute-0 podman[89047]: 2026-01-20 18:41:55.238944382 +0000 UTC m=+0.429151883 container start 793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6 (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:55 compute-0 podman[89047]: 2026-01-20 18:41:55.242512876 +0000 UTC m=+0.432720377 container attach 793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6 (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:41:55 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 20 18:41:55 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 20 18:41:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.whkwsm/server_addr}] v 0)
Jan 20 18:41:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2128350925' entity='client.admin' 
Jan 20 18:41:55 compute-0 systemd[1]: libpod-793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6.scope: Deactivated successfully.
Jan 20 18:41:55 compute-0 podman[89047]: 2026-01-20 18:41:55.65178375 +0000 UTC m=+0.841991231 container died 793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6 (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 18:41:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-51ed72aa489f60a37c6603fd3207b29564f907c9c5248a963b5f9ec62eb9aab1-merged.mount: Deactivated successfully.
Jan 20 18:41:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:41:56 compute-0 podman[89047]: 2026-01-20 18:41:56.093043823 +0000 UTC m=+1.283251304 container remove 793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6 (image=quay.io/ceph/ceph:v19, name=relaxed_pare, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:41:56 compute-0 sudo[89044]: pam_unix(sudo:session): session closed for user root
Jan 20 18:41:56 compute-0 systemd[1]: libpod-conmon-793847fab2646915ed865d2b14926b7f32b7aca5b03135c64b519c7d6aae03f6.scope: Deactivated successfully.
Jan 20 18:41:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 4.4 scrub starts
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 4.4 scrub ok
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 2.c scrub starts
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 2.c scrub ok
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 5.2 scrub starts
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 5.2 scrub ok
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 3.4 scrub starts
Jan 20 18:41:56 compute-0 ceph-mon[74381]: 3.4 scrub ok
Jan 20 18:41:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2128350925' entity='client.admin' 
Jan 20 18:41:56 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 20 18:41:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:41:56 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 20 18:41:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 20 18:41:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:41:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 18:41:56 compute-0 sudo[89124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtaebjhlyosuooaucaqzgpgqsjstbmzb ; /usr/bin/python3'
Jan 20 18:41:56 compute-0 sudo[89124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:41:57 compute-0 python3[89126]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.pyghhf/server_addr 192.168.122.102 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:41:57 compute-0 podman[89127]: 2026-01-20 18:41:57.116882401 +0000 UTC m=+0.022594500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:41:57 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 20 18:41:57 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 20 18:41:58 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 20 18:41:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v115: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v116: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 20 18:42:00 compute-0 podman[89127]: 2026-01-20 18:42:00.833419247 +0000 UTC m=+3.739131326 container create 51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38 (image=quay.io/ceph/ceph:v19, name=mystifying_chebyshev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:42:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 20 18:42:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 20 18:42:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 20 18:42:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 20 18:42:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 18:42:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 5.d scrub starts
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 5.d scrub ok
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 3.5 scrub starts
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 3.5 scrub ok
Jan 20 18:42:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 3.6 scrub starts
Jan 20 18:42:00 compute-0 ceph-mon[74381]: pgmap v114: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 3.6 scrub ok
Jan 20 18:42:00 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 5.1 scrub starts
Jan 20 18:42:00 compute-0 ceph-mon[74381]: 5.1 scrub ok
Jan 20 18:42:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.unzimq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 18:42:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.unzimq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:42:01 compute-0 systemd[1]: Started libpod-conmon-51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38.scope.
Jan 20 18:42:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00938383ef7cb8b11e7c723ae8a70fbb1b00065c713e9b0324fad86a57523594/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00938383ef7cb8b11e7c723ae8a70fbb1b00065c713e9b0324fad86a57523594/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00938383ef7cb8b11e7c723ae8a70fbb1b00065c713e9b0324fad86a57523594/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.unzimq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:42:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 20 18:42:01 compute-0 podman[89127]: 2026-01-20 18:42:01.609941463 +0000 UTC m=+4.515653562 container init 51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38 (image=quay.io/ceph/ceph:v19, name=mystifying_chebyshev, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:01 compute-0 podman[89127]: 2026-01-20 18:42:01.616186028 +0000 UTC m=+4.521898107 container start 51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38 (image=quay.io/ceph/ceph:v19, name=mystifying_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:42:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:42:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:01 compute-0 podman[89127]: 2026-01-20 18:42:01.649703846 +0000 UTC m=+4.555415955 container attach 51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38 (image=quay.io/ceph/ceph:v19, name=mystifying_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:42:01 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.unzimq on compute-1
Jan 20 18:42:01 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.unzimq on compute-1
Jan 20 18:42:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 20 18:42:01 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 20 18:42:01 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.pyghhf/server_addr}] v 0)
Jan 20 18:42:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 18:42:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 20 18:42:02 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 38 pg[8.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:42:02 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.19 scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.19 scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.f scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.f scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 3.0 deep-scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 3.0 deep-scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.e scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.e scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.c scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: pgmap v115: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.3 scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.3 scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 3.a scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 3.a scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.0 scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.0 scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.f scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.f scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: pgmap v116: 131 pgs: 131 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.15 scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 4.15 scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.1d scrub starts
Jan 20 18:42:02 compute-0 ceph-mon[74381]: 5.c scrub ok
Jan 20 18:42:02 compute-0 ceph-mon[74381]: osdmap e38: 3 total, 3 up, 3 in
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/522261975' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.unzimq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.unzimq", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:02 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:02 compute-0 ceph-mon[74381]: Deploying daemon rgw.rgw.compute-1.unzimq on compute-1
Jan 20 18:42:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2051329523' entity='client.admin' 
Jan 20 18:42:02 compute-0 systemd[1]: libpod-51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38.scope: Deactivated successfully.
Jan 20 18:42:02 compute-0 podman[89127]: 2026-01-20 18:42:02.163577102 +0000 UTC m=+5.069289201 container died 51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38 (image=quay.io/ceph/ceph:v19, name=mystifying_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 18:42:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v119: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:02 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.19 deep-scrub starts
Jan 20 18:42:02 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.19 deep-scrub ok
Jan 20 18:42:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-00938383ef7cb8b11e7c723ae8a70fbb1b00065c713e9b0324fad86a57523594-merged.mount: Deactivated successfully.
Jan 20 18:42:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 20 18:42:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:03 compute-0 podman[89127]: 2026-01-20 18:42:03.512231457 +0000 UTC m=+6.417943546 container remove 51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38 (image=quay.io/ceph/ceph:v19, name=mystifying_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:42:03 compute-0 systemd[1]: libpod-conmon-51ff8cf495dad5dcd260c8c9ecc236502f4c1ab7c730647cc664de7cecfd0b38.scope: Deactivated successfully.
Jan 20 18:42:03 compute-0 sudo[89124]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 20 18:42:03 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 20 18:42:03 compute-0 sudo[89203]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtiauhxukcsdsrzncglphiyjsmhiarul ; /usr/bin/python3'
Jan 20 18:42:03 compute-0 sudo[89203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:03 compute-0 ceph-mgr[74676]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 20 18:42:03 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 20 18:42:03 compute-0 python3[89205]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:04 compute-0 podman[89206]: 2026-01-20 18:42:03.992699999 +0000 UTC m=+0.020394122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:04 compute-0 podman[89206]: 2026-01-20 18:42:04.199707073 +0000 UTC m=+0.227401176 container create 4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c (image=quay.io/ceph/ceph:v19, name=bold_elgamal, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:04 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 20 18:42:04 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 40 pg[8.0( empty local-lis/les=38/40 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [0] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 5.1d scrub ok
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 4.c scrub starts
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 4.c scrub ok
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 4.1 scrub starts
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 4.1 scrub ok
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 3.1e scrub starts
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 3.1e scrub ok
Jan 20 18:42:04 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 18:42:04 compute-0 ceph-mon[74381]: osdmap e39: 3 total, 3 up, 3 in
Jan 20 18:42:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2051329523' entity='client.admin' 
Jan 20 18:42:04 compute-0 ceph-mon[74381]: pgmap v119: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 5.19 deep-scrub starts
Jan 20 18:42:04 compute-0 ceph-mon[74381]: 5.19 deep-scrub ok
Jan 20 18:42:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:04 compute-0 systemd[1]: Started libpod-conmon-4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c.scope.
Jan 20 18:42:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cacfd627110966282905e7088b6b3759ea37ac7648f1d1c938f71067c5b33e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cacfd627110966282905e7088b6b3759ea37ac7648f1d1c938f71067c5b33e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02cacfd627110966282905e7088b6b3759ea37ac7648f1d1c938f71067c5b33e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:04 compute-0 podman[89206]: 2026-01-20 18:42:04.525933447 +0000 UTC m=+0.553627580 container init 4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c (image=quay.io/ceph/ceph:v19, name=bold_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:04 compute-0 podman[89206]: 2026-01-20 18:42:04.533087667 +0000 UTC m=+0.560781770 container start 4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c (image=quay.io/ceph/ceph:v19, name=bold_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:42:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 18:42:04 compute-0 podman[89206]: 2026-01-20 18:42:04.540719989 +0000 UTC m=+0.568414092 container attach 4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c (image=quay.io/ceph/ceph:v19, name=bold_elgamal, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:42:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.phlxkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 18:42:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.phlxkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:42:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v121: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:04 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.phlxkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:42:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:05 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.phlxkp on compute-0
Jan 20 18:42:05 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.phlxkp on compute-0
Jan 20 18:42:05 compute-0 sudo[89244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:05 compute-0 sudo[89244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:05 compute-0 sudo[89244]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:05 compute-0 sudo[89271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:05 compute-0 sudo[89271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 3.d scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 3.d scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 3.11 deep-scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 3.11 deep-scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 4.1a scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 4.1a scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: osdmap e40: 3 total, 3 up, 3 in
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 2.10 scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 2.10 scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 3.1f scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 3.1f scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 4.d deep-scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 4.d deep-scrub ok
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.phlxkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:42:05 compute-0 ceph-mon[74381]: pgmap v121: 132 pgs: 1 unknown, 1 active+clean+scrubbing, 130 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:05 compute-0 ceph-mon[74381]: 5.3 scrub starts
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.phlxkp", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:05 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 18:42:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 20 18:42:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524813003' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 20 18:42:05 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 41 pg[9.0( empty local-lis/les=0/0 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:42:05 compute-0 podman[89338]: 2026-01-20 18:42:05.687474584 +0000 UTC m=+0.038356897 container create 25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mahavira, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 18:42:05 compute-0 systemd[1]: Started libpod-conmon-25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae.scope.
Jan 20 18:42:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:05 compute-0 podman[89338]: 2026-01-20 18:42:05.669830616 +0000 UTC m=+0.020712949 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:42:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 20 18:42:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 20 18:42:06 compute-0 podman[89338]: 2026-01-20 18:42:06.076319317 +0000 UTC m=+0.427201650 container init 25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mahavira, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:42:06 compute-0 podman[89338]: 2026-01-20 18:42:06.082225944 +0000 UTC m=+0.433108257 container start 25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:06 compute-0 tender_mahavira[89354]: 167 167
Jan 20 18:42:06 compute-0 systemd[1]: libpod-25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae.scope: Deactivated successfully.
Jan 20 18:42:06 compute-0 podman[89338]: 2026-01-20 18:42:06.111597102 +0000 UTC m=+0.462479445 container attach 25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:42:06 compute-0 podman[89338]: 2026-01-20 18:42:06.112099675 +0000 UTC m=+0.462981988 container died 25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mahavira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-37d16f2410e649a87cc22310be9ae3b37b722de03e6ee2b162b04157137b9b96-merged.mount: Deactivated successfully.
Jan 20 18:42:06 compute-0 podman[89338]: 2026-01-20 18:42:06.302708256 +0000 UTC m=+0.653590569 container remove 25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 18:42:06 compute-0 systemd[1]: libpod-conmon-25c2901527405e6af20869a285d41e587c9ea8fd43e571ff1d8e8f7cb39d28ae.scope: Deactivated successfully.
Jan 20 18:42:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v123: 133 pgs: 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.4 KiB/s wr, 6 op/s
Jan 20 18:42:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:42:06
Jan 20 18:42:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 20 18:42:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:42:06 compute-0 ceph-mgr[74676]: [balancer INFO root] Some PGs (0.007519) are inactive; try again later
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 2.13 scrub starts
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 2.13 scrub ok
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 5.3 scrub ok
Jan 20 18:42:06 compute-0 ceph-mon[74381]: Deploying daemon rgw.rgw.compute-0.phlxkp on compute-0
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 3.1c scrub starts
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 3.1c scrub ok
Jan 20 18:42:06 compute-0 ceph-mon[74381]: osdmap e41: 3 total, 3 up, 3 in
Jan 20 18:42:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2196779070' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 18:42:06 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 18:42:06 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 18:42:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3140638165' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Jan 20 18:42:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3524813003' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 2.1 scrub starts
Jan 20 18:42:06 compute-0 ceph-mon[74381]: 2.1 scrub ok
Jan 20 18:42:06 compute-0 systemd[1]: Reloading.
Jan 20 18:42:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 18:42:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 18:42:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 20 18:42:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3524813003' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 20 18:42:06 compute-0 bold_elgamal[89221]: module 'dashboard' is already disabled
Jan 20 18:42:06 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 20 18:42:06 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.cepfkm(active, since 3m), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:42:06 compute-0 podman[89206]: 2026-01-20 18:42:06.686715192 +0000 UTC m=+2.714409305 container died 4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c (image=quay.io/ceph/ceph:v19, name=bold_elgamal, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 18:42:06 compute-0 systemd-rc-local-generator[89408]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:42:06 compute-0 systemd-sysv-generator[89411]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:42:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 20 18:42:06 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 42 pg[9.0( empty local-lis/les=41/42 n=0 ec=41/41 lis/c=0/0 les/c/f=0/0/0 sis=41) [0] r=0 lpr=41 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:42:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 20 18:42:06 compute-0 systemd[1]: libpod-4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c.scope: Deactivated successfully.
Jan 20 18:42:07 compute-0 systemd[1]: Reloading.
Jan 20 18:42:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:42:07 compute-0 systemd-rc-local-generator[89448]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:42:07 compute-0 systemd-sysv-generator[89454]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:42:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-02cacfd627110966282905e7088b6b3759ea37ac7648f1d1c938f71067c5b33e-merged.mount: Deactivated successfully.
Jan 20 18:42:07 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.phlxkp for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:42:07 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:42:07 compute-0 podman[89206]: 2026-01-20 18:42:07.600692949 +0000 UTC m=+3.628387052 container remove 4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c (image=quay.io/ceph/ceph:v19, name=bold_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:07 compute-0 systemd[1]: libpod-conmon-4d17533857fe12fead23038519a46e03de294a30960c603f1856eae81b3e622c.scope: Deactivated successfully.
Jan 20 18:42:07 compute-0 sudo[89203]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 4.8 scrub starts
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 4.8 scrub ok
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 4.1b scrub starts
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 4.1b scrub ok
Jan 20 18:42:07 compute-0 ceph-mon[74381]: pgmap v123: 133 pgs: 1 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.4 KiB/s wr, 6 op/s
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 3.1a scrub starts
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 3.1a scrub ok
Jan 20 18:42:07 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 18:42:07 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 18:42:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3524813003' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 20 18:42:07 compute-0 ceph-mon[74381]: osdmap e42: 3 total, 3 up, 3 in
Jan 20 18:42:07 compute-0 ceph-mon[74381]: mgrmap e12: compute-0.cepfkm(active, since 3m), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 2.e scrub starts
Jan 20 18:42:07 compute-0 ceph-mon[74381]: 2.e scrub ok
Jan 20 18:42:07 compute-0 ceph-mon[74381]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:42:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 20 18:42:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 20 18:42:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 20 18:42:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 18:42:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 20 18:42:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 18:42:07 compute-0 sudo[89511]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzkfstntrexpcorjnhtqmtxchbbfanwd ; /usr/bin/python3'
Jan 20 18:42:07 compute-0 sudo[89511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:07 compute-0 python3[89519]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:07 compute-0 podman[89534]: 2026-01-20 18:42:07.907433817 +0000 UTC m=+0.098861781 container create 620f7a3733a82e0715ed65bedefead9e39e5371679eff615cd05bf697b523c83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-rgw-rgw-compute-0-phlxkp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:42:07 compute-0 podman[89534]: 2026-01-20 18:42:07.829540653 +0000 UTC m=+0.020968637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:42:07 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 20 18:42:07 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 20 18:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0f9eb5c81f215f5fc221ca9901d2801288d69e717b83b658bb5370ae9527ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0f9eb5c81f215f5fc221ca9901d2801288d69e717b83b658bb5370ae9527ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0f9eb5c81f215f5fc221ca9901d2801288d69e717b83b658bb5370ae9527ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a0f9eb5c81f215f5fc221ca9901d2801288d69e717b83b658bb5370ae9527ad/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.phlxkp supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:08 compute-0 podman[89547]: 2026-01-20 18:42:07.946159302 +0000 UTC m=+0.022417635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:08 compute-0 podman[89547]: 2026-01-20 18:42:08.113697572 +0000 UTC m=+0.189955905 container create 1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3 (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:42:08 compute-0 systemd[1]: Started libpod-conmon-1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3.scope.
Jan 20 18:42:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a195c3fcaf9a3fb4f8dcde279b52555ebae2ed9d54c38f95cb92c4c30ce638/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a195c3fcaf9a3fb4f8dcde279b52555ebae2ed9d54c38f95cb92c4c30ce638/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4a195c3fcaf9a3fb4f8dcde279b52555ebae2ed9d54c38f95cb92c4c30ce638/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:08 compute-0 podman[89534]: 2026-01-20 18:42:08.487040044 +0000 UTC m=+0.678468028 container init 620f7a3733a82e0715ed65bedefead9e39e5371679eff615cd05bf697b523c83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-rgw-rgw-compute-0-phlxkp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:42:08 compute-0 podman[89534]: 2026-01-20 18:42:08.493461424 +0000 UTC m=+0.684889398 container start 620f7a3733a82e0715ed65bedefead9e39e5371679eff615cd05bf697b523c83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-rgw-rgw-compute-0-phlxkp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:42:08 compute-0 podman[89547]: 2026-01-20 18:42:08.496588546 +0000 UTC m=+0.572846859 container init 1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3 (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:42:08 compute-0 podman[89547]: 2026-01-20 18:42:08.503982563 +0000 UTC m=+0.580240876 container start 1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3 (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:42:08 compute-0 radosgw[89571]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:42:08 compute-0 radosgw[89571]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Jan 20 18:42:08 compute-0 radosgw[89571]: framework: beast
Jan 20 18:42:08 compute-0 radosgw[89571]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 20 18:42:08 compute-0 radosgw[89571]: init_numa not setting numa affinity
Jan 20 18:42:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v126: 134 pgs: 2 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 7 op/s
Jan 20 18:42:08 compute-0 bash[89534]: 620f7a3733a82e0715ed65bedefead9e39e5371679eff615cd05bf697b523c83
Jan 20 18:42:08 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.phlxkp for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:42:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 20 18:42:08 compute-0 ceph-mon[74381]: 5.1b deep-scrub starts
Jan 20 18:42:08 compute-0 ceph-mon[74381]: 5.1b deep-scrub ok
Jan 20 18:42:08 compute-0 ceph-mon[74381]: 3.e scrub starts
Jan 20 18:42:08 compute-0 ceph-mon[74381]: 3.e scrub ok
Jan 20 18:42:08 compute-0 ceph-mon[74381]: osdmap e43: 3 total, 3 up, 3 in
Jan 20 18:42:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2196779070' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 18:42:08 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 18:42:08 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 18:42:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3140638165' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Jan 20 18:42:08 compute-0 ceph-mon[74381]: 2.19 scrub starts
Jan 20 18:42:08 compute-0 ceph-mon[74381]: 2.19 scrub ok
Jan 20 18:42:08 compute-0 podman[89547]: 2026-01-20 18:42:08.833115434 +0000 UTC m=+0.909373927 container attach 1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3 (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:42:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 18:42:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 18:42:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 20 18:42:08 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 20 18:42:08 compute-0 sudo[89271]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 20 18:42:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4276066950' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 20 18:42:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 18:42:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:09 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev ca8d558d-92ff-44a4-acaa-2d2a3cf2b0c4 (Updating rgw.rgw deployment (+3 -> 3))
Jan 20 18:42:09 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event ca8d558d-92ff-44a4-acaa-2d2a3cf2b0c4 (Updating rgw.rgw deployment (+3 -> 3)) in 16 seconds
Jan 20 18:42:09 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 18:42:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 20 18:42:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 18:42:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 20 18:42:10 compute-0 ceph-mon[74381]: 5.18 scrub starts
Jan 20 18:42:10 compute-0 ceph-mon[74381]: 5.18 scrub ok
Jan 20 18:42:10 compute-0 ceph-mon[74381]: pgmap v126: 134 pgs: 2 creating+peering, 132 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 7 op/s
Jan 20 18:42:10 compute-0 ceph-mon[74381]: 2.1b scrub starts
Jan 20 18:42:10 compute-0 ceph-mon[74381]: 2.1b scrub ok
Jan 20 18:42:10 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 18:42:10 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 18:42:10 compute-0 ceph-mon[74381]: osdmap e44: 3 total, 3 up, 3 in
Jan 20 18:42:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4276066950' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 20 18:42:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:10 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4276066950' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  1: '-n'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  2: 'mgr.compute-0.cepfkm'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  3: '-f'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  4: '--setuser'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  5: 'ceph'
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr respawn  6: '--setgroup'
Jan 20 18:42:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.cepfkm(active, since 3m), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:42:10 compute-0 systemd[1]: libpod-1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 20 18:42:10 compute-0 podman[90183]: 2026-01-20 18:42:10.360576647 +0000 UTC m=+0.022835586 container died 1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3 (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 18:42:10 compute-0 sshd-session[76016]: Connection closed by 192.168.122.100 port 60410
Jan 20 18:42:10 compute-0 sshd-session[75960]: Connection closed by 192.168.122.100 port 60382
Jan 20 18:42:10 compute-0 sshd-session[75987]: Connection closed by 192.168.122.100 port 60396
Jan 20 18:42:10 compute-0 sshd-session[75902]: Connection closed by 192.168.122.100 port 60362
Jan 20 18:42:10 compute-0 sshd-session[75931]: Connection closed by 192.168.122.100 port 60372
Jan 20 18:42:10 compute-0 sshd-session[75873]: Connection closed by 192.168.122.100 port 60356
Jan 20 18:42:10 compute-0 sshd-session[75757]: Connection closed by 192.168.122.100 port 60318
Jan 20 18:42:10 compute-0 sshd-session[75844]: Connection closed by 192.168.122.100 port 60352
Jan 20 18:42:10 compute-0 sshd-session[75815]: Connection closed by 192.168.122.100 port 60336
Jan 20 18:42:10 compute-0 sshd-session[75786]: Connection closed by 192.168.122.100 port 60328
Jan 20 18:42:10 compute-0 sshd-session[75727]: Connection closed by 192.168.122.100 port 60312
Jan 20 18:42:10 compute-0 sshd-session[75728]: Connection closed by 192.168.122.100 port 60316
Jan 20 18:42:10 compute-0 sshd-session[76013]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 sshd-session[75841]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 sshd-session[75722]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 sshd-session[75783]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 sshd-session[75705]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 sshd-session[75928]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 27 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 23 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 sshd-session[75754]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 25 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 21 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd[1]: session-33.scope: Consumed 26.019s CPU time.
Jan 20 18:42:10 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 30 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 33 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 24 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 27.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 23.
Jan 20 18:42:10 compute-0 sshd-session[75984]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 sshd-session[75899]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 sshd-session[75957]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 sshd-session[75812]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 25.
Jan 20 18:42:10 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 32 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 26 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 31 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 29 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 21.
Jan 20 18:42:10 compute-0 sshd-session[75870]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 30.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 33.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 24.
Jan 20 18:42:10 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Session 28 logged out. Waiting for processes to exit.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 32.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 26.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 31.
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 29.
Jan 20 18:42:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setuser ceph since I am not root
Jan 20 18:42:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setgroup ceph since I am not root
Jan 20 18:42:10 compute-0 systemd-logind[796]: Removed session 28.
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: pidfile_write: ignore empty --pid-file
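
The two "ignoring --setuser/--setgroup" lines are harmless: inside the cephadm-managed container the daemon already runs as the unprivileged ceph user, so the drop-privilege flags are no-ops. The respawned mgr then reports 19.2.3 squid, matching the CEPH_SHA1/CEPH_REF labels of the quay.io/ceph/ceph:v19 image above. A quick way to check for version skew across daemons:

    # counts running daemons per version; here everything should report 19.2.3
    ceph versions
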
Jan 20 18:42:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4a195c3fcaf9a3fb4f8dcde279b52555ebae2ed9d54c38f95cb92c4c30ce638-merged.mount: Deactivated successfully.
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'alerts'
Jan 20 18:42:10 compute-0 podman[90183]: 2026-01-20 18:42:10.475230786 +0000 UTC m=+0.137489715 container remove 1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3 (image=quay.io/ceph/ceph:v19, name=upbeat_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:42:10 compute-0 systemd[1]: libpod-conmon-1f3d89208b7bdf9b6e75a346ef8a0a15562c3d46692766779525811d1740b5c3.scope: Deactivated successfully.
Jan 20 18:42:10 compute-0 sudo[89511]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'balancer'
Jan 20 18:42:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:10.566+0000 7f10fa28a140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:42:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:10.646+0000 7f10fa28a140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
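
The "missing NOTIFY_TYPES member" messages repeat for most modules loaded below. NOTIFY_TYPES is a class attribute a mgr module may declare to subscribe only to specific cluster notifications; when it is absent the loader logs this at error level but still delivers notifications as before, so the messages are cosmetic noise rather than failures. Counting them is a quick sanity check that they all come from module load, assuming journald retains the ceph-mgr identifier used above:

    journalctl -t ceph-mgr | grep -c 'missing NOTIFY_TYPES'
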
Jan 20 18:42:10 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'cephadm'
Jan 20 18:42:10 compute-0 sudo[90246]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrcvwcrfxhzzhburnhtupnfppurjnxib ; /usr/bin/python3'
Jan 20 18:42:10 compute-0 sudo[90246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:10 compute-0 python3[90248]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
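
For readability, the one-shot container the Ansible task runs above breaks down as follows; it is the stock ceph CLI from the v19 image pointed at the host's /etc/ceph, not a long-lived daemon (same command as in the log, only re-wrapped):

    podman run --rm --net=host --ipc=host --interactive \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      dashboard set-grafana-api-username admin
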
Jan 20 18:42:11 compute-0 podman[90249]: 2026-01-20 18:42:10.916063225 +0000 UTC m=+0.021824128 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:11 compute-0 podman[90249]: 2026-01-20 18:42:11.053547969 +0000 UTC m=+0.159308852 container create 8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879 (image=quay.io/ceph/ceph:v19, name=focused_einstein, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:42:11 compute-0 systemd[1]: Started libpod-conmon-8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879.scope.
Jan 20 18:42:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8ca7e95501c814180220e0bd425f4cdb6768771292b66f1beb49f1087ff7da/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8ca7e95501c814180220e0bd425f4cdb6768771292b66f1beb49f1087ff7da/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea8ca7e95501c814180220e0bd425f4cdb6768771292b66f1beb49f1087ff7da/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
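
The kernel's xfs warnings fire because each bind mount into the container touches a filesystem created without the bigtime feature, whose inode timestamps cap at 2038-01-19 (0x7fffffff); they are informational only. On xfsprogs new enough to report the flag, a given filesystem can be checked with:

    # bigtime=1 means 64-bit timestamps; the path is an assumption for this host
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'
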
Jan 20 18:42:11 compute-0 podman[90249]: 2026-01-20 18:42:11.132539972 +0000 UTC m=+0.238300875 container init 8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879 (image=quay.io/ceph/ceph:v19, name=focused_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:42:11 compute-0 podman[90249]: 2026-01-20 18:42:11.148903676 +0000 UTC m=+0.254664549 container start 8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879 (image=quay.io/ceph/ceph:v19, name=focused_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 20 18:42:11 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'crash'
Jan 20 18:42:11 compute-0 ceph-mgr[74676]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:42:11 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'dashboard'
Jan 20 18:42:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:11.439+0000 7f10fa28a140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'devicehealth'
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:12.107+0000 7f10fa28a140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   from numpy import show_config as show_numpy_config
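
The SciPy/NumPy warning comes from the diskprediction_local module: ceph-mgr hosts each Python module in its own CPython sub-interpreter, which NumPy does not properly support, hence the caution about subtle bugs. It is benign here, and if disk-failure prediction is not needed the module can simply be left disabled:

    # only if the local disk failure predictor is unused
    ceph mgr module disable diskprediction_local
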
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:12.288+0000 7f10fa28a140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'influx'
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'insights'
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:12.366+0000 7f10fa28a140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'iostat'
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:12.514+0000 7f10fa28a140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'k8sevents'
Jan 20 18:42:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 20 18:42:12 compute-0 podman[90249]: 2026-01-20 18:42:12.9212217 +0000 UTC m=+2.026982623 container attach 8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879 (image=quay.io/ceph/ceph:v19, name=focused_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:42:12 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'localpool'
Jan 20 18:42:12 compute-0 ceph-mon[74381]: 4.18 scrub starts
Jan 20 18:42:12 compute-0 ceph-mon[74381]: 4.18 scrub ok
Jan 20 18:42:12 compute-0 ceph-mon[74381]: 4.9 scrub starts
Jan 20 18:42:12 compute-0 ceph-mon[74381]: 4.9 scrub ok
Jan 20 18:42:12 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4276066950' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 20 18:42:12 compute-0 ceph-mon[74381]: mgrmap e13: compute-0.cepfkm(active, since 3m), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:42:12 compute-0 ceph-mon[74381]: osdmap e45: 3 total, 3 up, 3 in
Jan 20 18:42:12 compute-0 ceph-mon[74381]: from='mgr.14122 192.168.122.100:0/2366581746' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mirroring'
Jan 20 18:42:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 20 18:42:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 20 18:42:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 20 18:42:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 20 18:42:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 20 18:42:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
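
Each radosgw instance tags the pools it creates with the rgw application on startup, which is why the same mon_command arrives from all three rgw daemons within a second of each other. The hand-run equivalent of the JSON payload above is:

    ceph osd pool application enable default.rgw.meta rgw
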
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'nfs'
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'orchestrator'
Jan 20 18:42:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:13.618+0000 7f10fa28a140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:42:13 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 46 pg[11.0( empty local-lis/les=0/0 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 18:42:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:13.847+0000 7f10fa28a140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:42:13 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_support'
Jan 20 18:42:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:13.932+0000 7f10fa28a140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mon[74381]: 3.1d scrub starts
Jan 20 18:42:14 compute-0 ceph-mon[74381]: 3.1d scrub ok
Jan 20 18:42:14 compute-0 ceph-mon[74381]: 5.4 deep-scrub starts
Jan 20 18:42:14 compute-0 ceph-mon[74381]: 5.4 deep-scrub ok
Jan 20 18:42:14 compute-0 ceph-mon[74381]: 5.e scrub starts
Jan 20 18:42:14 compute-0 ceph-mon[74381]: 5.e scrub ok
Jan 20 18:42:14 compute-0 ceph-mon[74381]: osdmap e46: 3 total, 3 up, 3 in
Jan 20 18:42:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2196779070' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:14 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:14 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3140638165' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 18:42:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:14.006+0000 7f10fa28a140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'progress'
Jan 20 18:42:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:14.098+0000 7f10fa28a140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'prometheus'
Jan 20 18:42:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:14.177+0000 7f10fa28a140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e46 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rbd_support'
Jan 20 18:42:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:14.551+0000 7f10fa28a140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'restful'
Jan 20 18:42:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:14.657+0000 7f10fa28a140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:42:14 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rgw'
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rook'
Jan 20 18:42:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:15.105+0000 7f10fa28a140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 18:42:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 20 18:42:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 20 18:42:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:15 compute-0 ceph-mon[74381]: 3.8 scrub starts
Jan 20 18:42:15 compute-0 ceph-mon[74381]: 3.8 scrub ok
Jan 20 18:42:15 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 47 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=0/0 les/c/f=0/0/0 sis=46) [0] r=0 lpr=46 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'selftest'
Jan 20 18:42:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:15.720+0000 7f10fa28a140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:15.793+0000 7f10fa28a140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'snap_schedule'
Jan 20 18:42:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:15.884+0000 7f10fa28a140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'stats'
Jan 20 18:42:15 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'status'
Jan 20 18:42:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:16.053+0000 7f10fa28a140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telegraf'
Jan 20 18:42:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:16.135+0000 7f10fa28a140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telemetry'
Jan 20 18:42:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 20 18:42:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:16.322+0000 7f10fa28a140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 18:42:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:16.557+0000 7f10fa28a140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'volumes'
Jan 20 18:42:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:16.840+0000 7f10fa28a140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'zabbix'
Jan 20 18:42:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:16.910+0000 7f10fa28a140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:42:16 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mgr e13 prepare_beacon:  waiting for osdmon writeable to blocklist old instance.
Jan 20 18:42:16 compute-0 ceph-mgr[74676]: ms_deliver_dispatch: unhandled message 0x559880d9d860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 18:42:18 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mgr e13 prepare_beacon:  waiting for osdmon writeable to blocklist old instance.
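
The repeated prepare_beacon lines are the mon holding off until the osdmap is writeable so it can blocklist the address of the pre-respawn mgr instance; only then is the restarted daemon activated (the osdmap epochs that follow carry the blocklist entry, preventing a stale active mgr from still writing). Current entries are visible with:

    ceph osd blocklist ls
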
Jan 20 18:42:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 18:42:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 18:42:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 18:42:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 20 18:42:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 20 18:42:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e48 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 18:42:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cepfkm
Jan 20 18:42:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 20 18:42:20 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 20 18:42:20 compute-0 systemd[75709]: Activating special unit Exit the Session...
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped target Main User Target.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped target Basic System.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped target Paths.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped target Sockets.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped target Timers.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 18:42:20 compute-0 systemd[75709]: Closed D-Bus User Message Bus Socket.
Jan 20 18:42:20 compute-0 systemd[75709]: Stopped Create User's Volatile Files and Directories.
Jan 20 18:42:20 compute-0 systemd[75709]: Removed slice User Application Slice.
Jan 20 18:42:20 compute-0 systemd[75709]: Reached target Shutdown.
Jan 20 18:42:20 compute-0 systemd[75709]: Finished Exit the Session.
Jan 20 18:42:20 compute-0 systemd[75709]: Reached target Exit the Session.
Jan 20 18:42:20 compute-0 ceph-mon[74381]: 3.9 deep-scrub starts
Jan 20 18:42:20 compute-0 ceph-mon[74381]: 3.9 deep-scrub ok
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 18:42:20 compute-0 ceph-mon[74381]: osdmap e47: 3 total, 3 up, 3 in
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2196779070' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3140638165' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Jan 20 18:42:20 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 20 18:42:20 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 20 18:42:20 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf started
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr handle_mgr_map Activating!
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm started
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr handle_mgr_map I am now activating
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.cepfkm(active, starting, since 0.738965s), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:42:20 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:42:20 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:42:20 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:42:20 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 20 18:42:20 compute-0 systemd[1]: user-42477.slice: Consumed 27.531s CPU time.
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: balancer
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [balancer INFO root] Starting
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Manager daemon compute-0.cepfkm is now available
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:42:20
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
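
The balancer starts in upmap mode with a 5% misplaced ceiling, but this mgr only just became active and has received no PG stats yet, so 100% of PGs ("1.000000") are still unknown and the optimization round is skipped; later rounds proceed normally. The state shown in these lines maps to, roughly:

    ceph balancer status
    ceph balancer mode upmap
    # the 0.050000 ceiling corresponds to this mgr option
    ceph config set mgr target_max_misplaced_ratio 0.05
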
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: cephadm
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: crash
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: dashboard
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO sso] Loading SSO DB version=1
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: devicehealth
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO root] Configured CherryPy, starting engine...
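
The dashboard comes up plain-HTTP on port 8443 (ssl=no), which is how this CI job configures it. The knobs behind that "server:" line, should TLS or a different port be wanted later:

    ceph config set mgr mgr/dashboard/ssl false
    ceph config set mgr mgr/dashboard/server_port 8443
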
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: iostat
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Starting
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: nfs
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: orchestrator
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: pg_autoscaler
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: progress
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] recovery thread starting
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] starting setup
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [progress INFO root] Loading...
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f1075561d60>, <progress.module.GhostEvent object at 0x7f1075561b50>, <progress.module.GhostEvent object at 0x7f1075561b20>, <progress.module.GhostEvent object at 0x7f1075561af0>, <progress.module.GhostEvent object at 0x7f1075561ac0>, <progress.module.GhostEvent object at 0x7f1075561a90>, <progress.module.GhostEvent object at 0x7f1075561a60>, <progress.module.GhostEvent object at 0x7f1075561a30>, <progress.module.GhostEvent object at 0x7f1075561a00>, <progress.module.GhostEvent object at 0x7f10755619d0>] historic events
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: rbd_support
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: restful
Jan 20 18:42:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"} v 0)
Jan 20 18:42:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: status
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [restful WARNING root] server not running: no certificate configured
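
Unlike the dashboard, the restful module refuses to start without a certificate, so this warning is expected on a fresh cluster. If the legacy REST API were actually wanted, a self-signed certificate and an API key are enough:

    ceph restful create-self-signed-cert
    ceph restful create-key admin
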
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: telemetry
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: volumes
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] PerfHandler: starting
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 20 18:42:20 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TaskHandler: starting
Jan 20 18:42:21 compute-0 sshd-session[90412]: Accepted publickey for ceph-admin from 192.168.122.100 port 34508 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:42:21 compute-0 systemd-logind[796]: New session 34 of user ceph-admin.
Jan 20 18:42:21 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 18:42:21 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 18:42:21 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 18:42:21 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 18:42:21 compute-0 systemd[90430]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.module] Engine started.
Jan 20 18:42:21 compute-0 systemd[90430]: Queued start job for default target Main User Target.
Jan 20 18:42:21 compute-0 systemd[90430]: Created slice User Application Slice.
Jan 20 18:42:21 compute-0 systemd[90430]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:42:21 compute-0 systemd[90430]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:42:21 compute-0 systemd[90430]: Reached target Paths.
Jan 20 18:42:21 compute-0 systemd[90430]: Reached target Timers.
Jan 20 18:42:21 compute-0 systemd[90430]: Starting D-Bus User Message Bus Socket...
Jan 20 18:42:21 compute-0 systemd[90430]: Starting Create User's Volatile Files and Directories...
Jan 20 18:42:21 compute-0 systemd[90430]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:42:21 compute-0 systemd[90430]: Finished Create User's Volatile Files and Directories.
Jan 20 18:42:21 compute-0 systemd[90430]: Reached target Sockets.
Jan 20 18:42:21 compute-0 systemd[90430]: Reached target Basic System.
Jan 20 18:42:21 compute-0 systemd[90430]: Reached target Main User Target.
Jan 20 18:42:21 compute-0 systemd[90430]: Startup finished in 127ms.
Jan 20 18:42:21 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 18:42:21 compute-0 systemd[1]: Started Session 34 of User ceph-admin.
Jan 20 18:42:21 compute-0 sshd-session[90412]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:42:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"} v 0)
Jan 20 18:42:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [rbd_support INFO root] setup complete
Jan 20 18:42:21 compute-0 sudo[90448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:21 compute-0 sudo[90448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:21 compute-0 sudo[90448]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:21 compute-0 sudo[90473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:42:21 compute-0 sudo[90473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:21 compute-0 ceph-mon[74381]: 5.1a scrub starts
Jan 20 18:42:21 compute-0 ceph-mon[74381]: 5.1a scrub ok
Jan 20 18:42:21 compute-0 ceph-mon[74381]: 2.15 scrub starts
Jan 20 18:42:21 compute-0 ceph-mon[74381]: 2.15 scrub ok
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-2.mqbqmb' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='client.? ' entity='client.rgw.rgw.compute-1.unzimq' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2396071884' entity='client.rgw.rgw.compute-0.phlxkp' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Activating manager daemon compute-0.cepfkm
Jan 20 18:42:21 compute-0 ceph-mon[74381]: osdmap e48: 3 total, 3 up, 3 in
Jan 20 18:42:21 compute-0 ceph-mon[74381]: osdmap e49: 3 total, 3 up, 3 in
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf started
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm started
Jan 20 18:42:21 compute-0 ceph-mon[74381]: mgrmap e14: compute-0.cepfkm(active, starting, since 0.738965s), standbys: compute-2.pyghhf, compute-1.whkwsm
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: Manager daemon compute-0.cepfkm is now available
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.cepfkm(active, since 2s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14385 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v3: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:21] ENGINE Bus STARTING
Jan 20 18:42:21 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:21] ENGINE Bus STARTING
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:22] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:22] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:22] ENGINE Client ('192.168.122.100', 35008) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:22] ENGINE Client ('192.168.122.100', 35008) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:22] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:22] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:22] ENGINE Bus STARTED
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:22] ENGINE Bus STARTED
Jan 20 18:42:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:22 compute-0 focused_einstein[90275]: Option GRAFANA_API_USERNAME updated
Jan 20 18:42:22 compute-0 systemd[1]: libpod-8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879.scope: Deactivated successfully.
Jan 20 18:42:22 compute-0 podman[90249]: 2026-01-20 18:42:22.230494329 +0000 UTC m=+11.336255232 container died 8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879 (image=quay.io/ceph/ceph:v19, name=focused_einstein, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:42:22 compute-0 podman[90570]: 2026-01-20 18:42:22.230278104 +0000 UTC m=+0.356982266 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v4: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea8ca7e95501c814180220e0bd425f4cdb6768771292b66f1beb49f1087ff7da-merged.mount: Deactivated successfully.
Jan 20 18:42:22 compute-0 podman[90249]: 2026-01-20 18:42:22.571704858 +0000 UTC m=+11.677465741 container remove 8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879 (image=quay.io/ceph/ceph:v19, name=focused_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:42:22 compute-0 podman[90570]: 2026-01-20 18:42:22.574266462 +0000 UTC m=+0.700970624 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:42:22 compute-0 sudo[90246]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:22 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 18:42:22 compute-0 systemd[1]: libpod-conmon-8c676df98a327b80330c9c53733d3fc78c03c6c3d755169a90032d61ef136879.scope: Deactivated successfully.
Jan 20 18:42:22 compute-0 sudo[90707]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngbrkarxnmhmtalzfttrkediirliysap ; /usr/bin/python3'
Jan 20 18:42:22 compute-0 sudo[90707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:22 compute-0 python3[90714]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Jan 20 18:42:22 compute-0 sudo[90473]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:42:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:42:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 18:42:22 compute-0 podman[90749]: 2026-01-20 18:42:22.936600761 +0000 UTC m=+0.047275716 container create 902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f (image=quay.io/ceph/ceph:v19, name=interesting_burnell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:42:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:22 compute-0 radosgw[89571]: v1 topic migration: starting v1 topic migration..
Jan 20 18:42:22 compute-0 radosgw[89571]: LDAP not started since no server URIs were provided in the configuration.
Jan 20 18:42:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-rgw-rgw-compute-0-phlxkp[89562]: 2026-01-20T18:42:22.979+0000 7f0fc7709980 -1 LDAP not started since no server URIs were provided in the configuration.
Jan 20 18:42:22 compute-0 systemd[1]: Started libpod-conmon-902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f.scope.
Jan 20 18:42:23 compute-0 podman[90749]: 2026-01-20 18:42:22.909542332 +0000 UTC m=+0.020217317 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f8b4ef347959c02908aa8c92458bc862678f2bca3c217cb8b6767b026299d4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f8b4ef347959c02908aa8c92458bc862678f2bca3c217cb8b6767b026299d4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30f8b4ef347959c02908aa8c92458bc862678f2bca3c217cb8b6767b026299d4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:23 compute-0 ceph-mon[74381]: mgrmap e15: compute-0.cepfkm(active, since 2s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:23 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:21] ENGINE Bus STARTING
Jan 20 18:42:23 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:22] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:42:23 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:22] ENGINE Client ('192.168.122.100', 35008) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:42:23 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:22] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:42:23 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:22] ENGINE Bus STARTED
Jan 20 18:42:23 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 radosgw[89571]: v1 topic migration: finished v1 topic migration
Jan 20 18:42:23 compute-0 podman[90749]: 2026-01-20 18:42:23.106230971 +0000 UTC m=+0.216905936 container init 902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f (image=quay.io/ceph/ceph:v19, name=interesting_burnell, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:42:23 compute-0 podman[90749]: 2026-01-20 18:42:23.114604331 +0000 UTC m=+0.225279286 container start 902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f (image=quay.io/ceph/ceph:v19, name=interesting_burnell, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 radosgw[89571]: framework: beast
Jan 20 18:42:23 compute-0 radosgw[89571]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 20 18:42:23 compute-0 radosgw[89571]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 20 18:42:23 compute-0 podman[90749]: 2026-01-20 18:42:23.209074347 +0000 UTC m=+0.319749302 container attach 902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f (image=quay.io/ceph/ceph:v19, name=interesting_burnell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:42:23 compute-0 radosgw[89571]: starting handler: beast
Jan 20 18:42:23 compute-0 radosgw[89571]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:42:23 compute-0 radosgw[89571]: mgrc service_daemon_register rgw.14364 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.phlxkp,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864316,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=e5424423-e0b7-453a-acdd-580a59c79a77,zone_name=default,zonegroup_id=3115895e-8a03-4fc4-b262-7d669efe3b52,zonegroup_name=default}
Jan 20 18:42:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.cepfkm(active, since 3s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 sudo[90822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:23 compute-0 sudo[90822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:23 compute-0 sudo[90822]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14415 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Jan 20 18:42:23 compute-0 sudo[90847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:42:23 compute-0 sudo[90847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:23 compute-0 interesting_burnell[90796]: Option GRAFANA_API_PASSWORD updated
Jan 20 18:42:23 compute-0 systemd[1]: libpod-902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f.scope: Deactivated successfully.
Jan 20 18:42:23 compute-0 podman[90749]: 2026-01-20 18:42:23.7272775 +0000 UTC m=+0.837952455 container died 902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f (image=quay.io/ceph/ceph:v19, name=interesting_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 18:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-30f8b4ef347959c02908aa8c92458bc862678f2bca3c217cb8b6767b026299d4-merged.mount: Deactivated successfully.
Jan 20 18:42:24 compute-0 podman[90749]: 2026-01-20 18:42:24.011567433 +0000 UTC m=+1.122242388 container remove 902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f (image=quay.io/ceph/ceph:v19, name=interesting_burnell, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:42:24 compute-0 sudo[90707]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:24 compute-0 systemd[1]: libpod-conmon-902975f3de77b65f938b89d488d47732471ef5ee9110f944660a06836f2c4a7f.scope: Deactivated successfully.
Jan 20 18:42:24 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 18:42:24 compute-0 sudo[90847]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:24 compute-0 sudo[90940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyncsrmhxzgeeqqwlopnwjxftnvwkqnk ; /usr/bin/python3'
Jan 20 18:42:24 compute-0 sudo[90940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:24 compute-0 ceph-mon[74381]: pgmap v4: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:24 compute-0 ceph-mon[74381]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: Cluster is now healthy
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mgrmap e16: compute-0.cepfkm(active, since 3s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Jan 20 18:42:24 compute-0 sudo[90943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:24 compute-0 sudo[90943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:24 compute-0 sudo[90943]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:24 compute-0 sudo[90968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 20 18:42:24 compute-0 sudo[90968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:24 compute-0 python3[90942]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:24 compute-0 podman[90993]: 2026-01-20 18:42:24.461560738 +0000 UTC m=+0.100974381 container create 921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0 (image=quay.io/ceph/ceph:v19, name=pedantic_golick, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:42:24 compute-0 podman[90993]: 2026-01-20 18:42:24.382420924 +0000 UTC m=+0.021834587 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:24 compute-0 systemd[1]: Started libpod-conmon-921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0.scope.
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v5: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db863ba63d76735b09dfaf8a9621ee4533044b4810eab6c8bb9eadec63a42a7d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db863ba63d76735b09dfaf8a9621ee4533044b4810eab6c8bb9eadec63a42a7d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db863ba63d76735b09dfaf8a9621ee4533044b4810eab6c8bb9eadec63a42a7d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:24 compute-0 sudo[90968]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:24 compute-0 podman[90993]: 2026-01-20 18:42:24.659694431 +0000 UTC m=+0.299108094 container init 921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0 (image=quay.io/ceph/ceph:v19, name=pedantic_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:42:24 compute-0 podman[90993]: 2026-01-20 18:42:24.665961009 +0000 UTC m=+0.305374652 container start 921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0 (image=quay.io/ceph/ceph:v19, name=pedantic_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:24 compute-0 podman[90993]: 2026-01-20 18:42:24.847263721 +0000 UTC m=+0.486677364 container attach 921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0 (image=quay.io/ceph/ceph:v19, name=pedantic_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:42:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:42:24 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:42:25 compute-0 sudo[91049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:42:25 compute-0 sudo[91049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91049]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:42:25 compute-0 sudo[91074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91074]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91099]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Jan 20 18:42:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 pedantic_golick[91015]: Option ALERTMANAGER_API_HOST updated
Jan 20 18:42:25 compute-0 systemd[1]: libpod-921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0.scope: Deactivated successfully.
Jan 20 18:42:25 compute-0 podman[90993]: 2026-01-20 18:42:25.218989475 +0000 UTC m=+0.858403118 container died 921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0 (image=quay.io/ceph/ceph:v19, name=pedantic_golick, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 18:42:25 compute-0 sudo[91125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:25 compute-0 sudo[91125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91125]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-db863ba63d76735b09dfaf8a9621ee4533044b4810eab6c8bb9eadec63a42a7d-merged.mount: Deactivated successfully.
Jan 20 18:42:25 compute-0 podman[90993]: 2026-01-20 18:42:25.262460584 +0000 UTC m=+0.901874227 container remove 921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0 (image=quay.io/ceph/ceph:v19, name=pedantic_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:25 compute-0 systemd[1]: libpod-conmon-921c99ae842ebb728ff78f77bf526bf202d2f0261396f3d4af75a03d9c1674f0.scope: Deactivated successfully.
Jan 20 18:42:25 compute-0 sudo[90940]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='client.14415 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:42:25 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:25 compute-0 sudo[91158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91158]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91209]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvodkdseiycvtwrwwjpfyunxfmokuhbp ; /usr/bin/python3'
Jan 20 18:42:25 compute-0 sudo[91257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:25 compute-0 sudo[91258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91258]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 18:42:25 compute-0 sudo[91285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91285]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:25 compute-0 sudo[91310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:25 compute-0 sudo[91310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91310]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 python3[91267]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:25 compute-0 sudo[91335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:25 compute-0 sudo[91335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91335]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:25 compute-0 sudo[91373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91373]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:25 compute-0 podman[91336]: 2026-01-20 18:42:25.63711032 +0000 UTC m=+0.030737772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:25 compute-0 podman[91336]: 2026-01-20 18:42:25.738577512 +0000 UTC m=+0.132204934 container create ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c (image=quay.io/ceph/ceph:v19, name=dreamy_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:42:25 compute-0 sudo[91398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:25 compute-0 sudo[91398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91398]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 systemd[1]: Started libpod-conmon-ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c.scope.
Jan 20 18:42:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0295dd5df67ca185f291f836c2aade520aafea8fa9b91275380ada70cce4fedc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0295dd5df67ca185f291f836c2aade520aafea8fa9b91275380ada70cce4fedc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0295dd5df67ca185f291f836c2aade520aafea8fa9b91275380ada70cce4fedc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:25 compute-0 sudo[91425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91425]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91476]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91476]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91476]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:25 compute-0 sudo[91501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:25 compute-0 sudo[91501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:25 compute-0 sudo[91501]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 podman[91336]: 2026-01-20 18:42:26.010122816 +0000 UTC m=+0.403750258 container init ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c (image=quay.io/ceph/ceph:v19, name=dreamy_ganguly, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:42:26 compute-0 sudo[91526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:26 compute-0 sudo[91526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 podman[91336]: 2026-01-20 18:42:26.016081845 +0000 UTC m=+0.409709267 container start ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c (image=quay.io/ceph/ceph:v19, name=dreamy_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:26 compute-0 sudo[91526]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 podman[91336]: 2026-01-20 18:42:26.051910933 +0000 UTC m=+0.445538355 container attach ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c (image=quay.io/ceph/ceph:v19, name=dreamy_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:42:26 compute-0 sudo[91552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:42:26 compute-0 sudo[91552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91552]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 sudo[91577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:42:26 compute-0 sudo[91577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91577]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 sudo[91621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:26 compute-0 sudo[91621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91621]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 sudo[91646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:26 compute-0 sudo[91646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91646]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 sudo[91671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:26 compute-0 sudo[91671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91671]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.24163 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Jan 20 18:42:26 compute-0 sudo[91719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:26 compute-0 sudo[91719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91719]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 sudo[91745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:26 compute-0 sudo[91745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91745]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 sudo[91770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 sudo[91770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91770]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 sudo[91795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:26 compute-0 sudo[91795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91795]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v6: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:26 compute-0 sudo[91820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:26 compute-0 sudo[91820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91820]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 sudo[91845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:26 compute-0 sudo[91845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:26 compute-0 sudo[91845]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:26 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:27 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:27 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:27 compute-0 sudo[91870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:27 compute-0 sudo[91870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:27 compute-0 sudo[91870]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:27 compute-0 sudo[91895]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:27 compute-0 sudo[91895]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:27 compute-0 ceph-mon[74381]: pgmap v5: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:27 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:42:27 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:42:27 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:42:27 compute-0 ceph-mon[74381]: from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:27 compute-0 sudo[91895]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:27 compute-0 sudo[91943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:27 compute-0 sudo[91943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:27 compute-0 sudo[91943]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:27 compute-0 sudo[91968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:27 compute-0 sudo[91968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:27 compute-0 sudo[91968]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:27 compute-0 sudo[91993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:27 compute-0 sudo[91993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:27 compute-0 sudo[91993]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:27 compute-0 dreamy_ganguly[91438]: Option PROMETHEUS_API_HOST updated
Jan 20 18:42:27 compute-0 systemd[1]: libpod-ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c.scope: Deactivated successfully.
Jan 20 18:42:27 compute-0 podman[91336]: 2026-01-20 18:42:27.701918472 +0000 UTC m=+2.095545894 container died ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c (image=quay.io/ceph/ceph:v19, name=dreamy_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:42:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-0295dd5df67ca185f291f836c2aade520aafea8fa9b91275380ada70cce4fedc-merged.mount: Deactivated successfully.
Jan 20 18:42:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 podman[91336]: 2026-01-20 18:42:28.175271672 +0000 UTC m=+2.568899094 container remove ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c (image=quay.io/ceph/ceph:v19, name=dreamy_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:42:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 systemd[1]: libpod-conmon-ecaf6416bea870e29200ec60389874827363bbdd22d38e469e7d50bfe37e682c.scope: Deactivated successfully.
Jan 20 18:42:28 compute-0 sudo[91257]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:42:28 compute-0 sudo[92056]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfqpwgehxqaxxnekyazkwvktdbyshzmu ; /usr/bin/python3'
Jan 20 18:42:28 compute-0 sudo[92056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:28 compute-0 python3[92058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v7: 135 pgs: 135 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 2.5 KiB/s wr, 281 op/s
Jan 20 18:42:28 compute-0 podman[92059]: 2026-01-20 18:42:28.55317934 +0000 UTC m=+0.075282017 container create 59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796 (image=quay.io/ceph/ceph:v19, name=bold_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 18:42:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:42:28 compute-0 podman[92059]: 2026-01-20 18:42:28.5052579 +0000 UTC m=+0.027360597 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='client.24163 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:28 compute-0 ceph-mon[74381]: pgmap v6: 135 pgs: 135 active+clean; 452 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:28 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:28 compute-0 systemd[1]: Started libpod-conmon-59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796.scope.
Jan 20 18:42:28 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev e9e81521-7bd6-4328-8e14-392479600977 (Updating node-exporter deployment (+3 -> 3))
Jan 20 18:42:28 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Jan 20 18:42:28 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Jan 20 18:42:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56180ce900c7802ae758de751f281e35e4bade2328ebced217ad7c7c78b382df/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56180ce900c7802ae758de751f281e35e4bade2328ebced217ad7c7c78b382df/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56180ce900c7802ae758de751f281e35e4bade2328ebced217ad7c7c78b382df/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:28 compute-0 sudo[92077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:28 compute-0 sudo[92077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:28 compute-0 sudo[92077]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:28 compute-0 podman[92059]: 2026-01-20 18:42:28.914495363 +0000 UTC m=+0.436598060 container init 59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796 (image=quay.io/ceph/ceph:v19, name=bold_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:42:28 compute-0 podman[92059]: 2026-01-20 18:42:28.922681918 +0000 UTC m=+0.444784595 container start 59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796 (image=quay.io/ceph/ceph:v19, name=bold_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:42:28 compute-0 sudo[92102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:28 compute-0 sudo[92102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:29 compute-0 podman[92059]: 2026-01-20 18:42:29.026115119 +0000 UTC m=+0.548217806 container attach 59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796 (image=quay.io/ceph/ceph:v19, name=bold_jackson, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:42:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:29 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 20 18:42:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:29 compute-0 bold_jackson[92074]: Option GRAFANA_API_URL updated
Jan 20 18:42:29 compute-0 systemd[1]: Reloading.
Jan 20 18:42:29 compute-0 podman[92059]: 2026-01-20 18:42:29.498480984 +0000 UTC m=+1.020583691 container died 59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796 (image=quay.io/ceph/ceph:v19, name=bold_jackson, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:42:29 compute-0 systemd-rc-local-generator[92233]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:42:29 compute-0 systemd-sysv-generator[92236]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:42:29 compute-0 systemd[1]: libpod-59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796.scope: Deactivated successfully.
Jan 20 18:42:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-56180ce900c7802ae758de751f281e35e4bade2328ebced217ad7c7c78b382df-merged.mount: Deactivated successfully.
Jan 20 18:42:29 compute-0 systemd[1]: Reloading.
Jan 20 18:42:29 compute-0 podman[92059]: 2026-01-20 18:42:29.995703411 +0000 UTC m=+1.517806088 container remove 59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796 (image=quay.io/ceph/ceph:v19, name=bold_jackson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:42:30 compute-0 sudo[92056]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:30 compute-0 systemd-sysv-generator[92276]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:42:30 compute-0 systemd-rc-local-generator[92271]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:42:30 compute-0 systemd[1]: libpod-conmon-59a2f3f08fc670b106469307c8a8da6cbbdfcbd4d9e7bc4a6558b041b2c62796.scope: Deactivated successfully.
Jan 20 18:42:30 compute-0 sudo[92303]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntlqoboklocpwcpdridznrkvybeavoqd ; /usr/bin/python3'
Jan 20 18:42:30 compute-0 sudo[92303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:30 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:42:30 compute-0 python3[92307]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:30 compute-0 bash[92353]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Jan 20 18:42:30 compute-0 podman[92354]: 2026-01-20 18:42:30.448340102 +0000 UTC m=+0.028754312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v8: 135 pgs: 135 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 2.0 KiB/s wr, 216 op/s
Jan 20 18:42:30 compute-0 bash[92353]: Getting image source signatures
Jan 20 18:42:30 compute-0 bash[92353]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Jan 20 18:42:30 compute-0 bash[92353]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Jan 20 18:42:30 compute-0 bash[92353]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Jan 20 18:42:30 compute-0 podman[92354]: 2026-01-20 18:42:30.900742597 +0000 UTC m=+0.481156777 container create 9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a (image=quay.io/ceph/ceph:v19, name=jovial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 18:42:30 compute-0 ceph-mon[74381]: pgmap v7: 135 pgs: 135 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 2.5 KiB/s wr, 281 op/s
Jan 20 18:42:30 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:30 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:30 compute-0 ceph-mon[74381]: Deploying daemon node-exporter.compute-0 on compute-0
Jan 20 18:42:30 compute-0 ceph-mon[74381]: from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:30 compute-0 ceph-mon[74381]: from='mgr.14373 192.168.122.100:0/3929921856' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:30 compute-0 systemd[1]: Started libpod-conmon-9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a.scope.
Jan 20 18:42:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e968500df5d5093ea3f97e1d2143904cb16247a6ed479c94d975949b82f9faa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e968500df5d5093ea3f97e1d2143904cb16247a6ed479c94d975949b82f9faa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e968500df5d5093ea3f97e1d2143904cb16247a6ed479c94d975949b82f9faa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:31 compute-0 podman[92354]: 2026-01-20 18:42:31.165941871 +0000 UTC m=+0.746356071 container init 9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a (image=quay.io/ceph/ceph:v19, name=jovial_babbage, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 18:42:31 compute-0 podman[92354]: 2026-01-20 18:42:31.173573723 +0000 UTC m=+0.753987903 container start 9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a (image=quay.io/ceph/ceph:v19, name=jovial_babbage, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:42:31 compute-0 podman[92354]: 2026-01-20 18:42:31.193214374 +0000 UTC m=+0.773628584 container attach 9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a (image=quay.io/ceph/ceph:v19, name=jovial_babbage, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:42:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Jan 20 18:42:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/543553434' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v9: 135 pgs: 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 1.6 KiB/s wr, 234 op/s
Jan 20 18:42:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/543553434' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  1: '-n'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  2: 'mgr.compute-0.cepfkm'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  3: '-f'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  4: '--setuser'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  5: 'ceph'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  6: '--setgroup'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  7: 'ceph'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  8: '--default-log-to-file=false'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  9: '--default-log-to-journald=true'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 20 18:42:32 compute-0 ceph-mgr[74676]: mgr respawn  exe_path /proc/self/exe
Jan 20 18:42:32 compute-0 systemd[1]: libpod-9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a.scope: Deactivated successfully.
Jan 20 18:42:32 compute-0 podman[92354]: 2026-01-20 18:42:32.997865809 +0000 UTC m=+2.578279999 container died 9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a (image=quay.io/ceph/ceph:v19, name=jovial_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 18:42:33 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.cepfkm(active, since 13s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:33 compute-0 ceph-mon[74381]: pgmap v8: 135 pgs: 135 active+clean; 454 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 139 KiB/s rd, 2.0 KiB/s wr, 216 op/s
Jan 20 18:42:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/543553434' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Jan 20 18:42:33 compute-0 sshd-session[90446]: Connection closed by 192.168.122.100 port 34508
Jan 20 18:42:33 compute-0 sshd-session[90412]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:42:33 compute-0 systemd-logind[796]: Session 34 logged out. Waiting for processes to exit.
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setuser ceph since I am not root
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setgroup ceph since I am not root
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: pidfile_write: ignore empty --pid-file
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'alerts'
Jan 20 18:42:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e968500df5d5093ea3f97e1d2143904cb16247a6ed479c94d975949b82f9faa-merged.mount: Deactivated successfully.
Jan 20 18:42:33 compute-0 bash[92353]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Jan 20 18:42:33 compute-0 podman[92354]: 2026-01-20 18:42:33.21584185 +0000 UTC m=+2.796256030 container remove 9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a (image=quay.io/ceph/ceph:v19, name=jovial_babbage, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:42:33 compute-0 bash[92353]: Writing manifest to image destination
Jan 20 18:42:33 compute-0 systemd[1]: libpod-conmon-9b945a05339f020df7f5272bb892d392900cbb71d0af749bf413bb78c3b9b13a.scope: Deactivated successfully.
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'balancer'
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:33.225+0000 7f46a8198140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:42:33 compute-0 sudo[92303]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:33 compute-0 podman[92353]: 2026-01-20 18:42:33.256046128 +0000 UTC m=+2.846042388 container create ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0efa559753edbff0e68fb9753e884ab2c71245a3a2814ffc4cccc69b3e1fcc9a/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:33 compute-0 podman[92353]: 2026-01-20 18:42:33.308766709 +0000 UTC m=+2.898762989 container init ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:33.308+0000 7f46a8198140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:42:33 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'cephadm'
Jan 20 18:42:33 compute-0 podman[92353]: 2026-01-20 18:42:33.313479527 +0000 UTC m=+2.903475787 container start ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:42:33 compute-0 bash[92353]: ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246
Jan 20 18:42:33 compute-0 podman[92353]: 2026-01-20 18:42:33.24140985 +0000 UTC m=+2.831406130 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.320Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.320Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.323Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.323Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=arp
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=bcache
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=bonding
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=cpu
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.324Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=dmi
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=edac
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=entropy
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=filefd
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=netclass
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=netdev
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.325Z caller=node_exporter.go:117 level=info collector=netstat
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=nfs
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=nvme
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=os
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=pressure
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=rapl
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=selinux
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=softnet
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=stat
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=textfile
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=time
Jan 20 18:42:33 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.326Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.327Z caller=node_exporter.go:117 level=info collector=uname
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.327Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.327Z caller=node_exporter.go:117 level=info collector=xfs
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.327Z caller=node_exporter.go:117 level=info collector=zfs
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.327Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 20 18:42:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[92499]: ts=2026-01-20T18:42:33.327Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 20 18:42:33 compute-0 sudo[92102]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:33 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 18:42:33 compute-0 systemd[1]: session-34.scope: Consumed 4.895s CPU time.
Jan 20 18:42:33 compute-0 systemd-logind[796]: Removed session 34.
Jan 20 18:42:33 compute-0 sudo[92532]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxpqssmwvnbaeavijrjzcnvhnaijxwpf ; /usr/bin/python3'
Jan 20 18:42:33 compute-0 sudo[92532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:33 compute-0 python3[92534]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:33 compute-0 podman[92535]: 2026-01-20 18:42:33.614709424 +0000 UTC m=+0.057806049 container create 8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84 (image=quay.io/ceph/ceph:v19, name=relaxed_hermann, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:33 compute-0 systemd[1]: Started libpod-conmon-8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84.scope.
Jan 20 18:42:33 compute-0 podman[92535]: 2026-01-20 18:42:33.584373103 +0000 UTC m=+0.027469808 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07bbb9ec76edae4b225e900bdf75ed8e6153b3afe1445c5b129bae31c5b5722c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07bbb9ec76edae4b225e900bdf75ed8e6153b3afe1445c5b129bae31c5b5722c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07bbb9ec76edae4b225e900bdf75ed8e6153b3afe1445c5b129bae31c5b5722c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:33 compute-0 podman[92535]: 2026-01-20 18:42:33.714524134 +0000 UTC m=+0.157620789 container init 8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84 (image=quay.io/ceph/ceph:v19, name=relaxed_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:33 compute-0 podman[92535]: 2026-01-20 18:42:33.722243298 +0000 UTC m=+0.165339913 container start 8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84 (image=quay.io/ceph/ceph:v19, name=relaxed_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:42:33 compute-0 podman[92535]: 2026-01-20 18:42:33.726518665 +0000 UTC m=+0.169615290 container attach 8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84 (image=quay.io/ceph/ceph:v19, name=relaxed_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:34 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'crash'
Jan 20 18:42:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:34.115+0000 7f46a8198140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:42:34 compute-0 ceph-mgr[74676]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:42:34 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'dashboard'
Jan 20 18:42:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Jan 20 18:42:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/765852702' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 20 18:42:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/543553434' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Jan 20 18:42:34 compute-0 ceph-mon[74381]: mgrmap e17: compute-0.cepfkm(active, since 13s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/765852702' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 20 18:42:34 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.cepfkm(active, since 14s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:34 compute-0 systemd[1]: libpod-8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84.scope: Deactivated successfully.
Jan 20 18:42:34 compute-0 podman[92535]: 2026-01-20 18:42:34.306915256 +0000 UTC m=+0.750011881 container died 8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84 (image=quay.io/ceph/ceph:v19, name=relaxed_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-07bbb9ec76edae4b225e900bdf75ed8e6153b3afe1445c5b129bae31c5b5722c-merged.mount: Deactivated successfully.
Jan 20 18:42:34 compute-0 podman[92535]: 2026-01-20 18:42:34.346626061 +0000 UTC m=+0.789722686 container remove 8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84 (image=quay.io/ceph/ceph:v19, name=relaxed_hermann, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:34 compute-0 systemd[1]: libpod-conmon-8f130a15eb4777122fce8efb10e654fb77f047d15663821962e3bf8f46d87a84.scope: Deactivated successfully.
Jan 20 18:42:34 compute-0 sudo[92532]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:34 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'devicehealth'
Jan 20 18:42:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:34.844+0000 7f46a8198140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:42:34 compute-0 ceph-mgr[74676]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:42:34 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 18:42:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 18:42:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 18:42:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   from numpy import show_config as show_numpy_config
Jan 20 18:42:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:35.010+0000 7f46a8198140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'influx'
Jan 20 18:42:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:35.078+0000 7f46a8198140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'insights'
Jan 20 18:42:35 compute-0 python3[92676]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'iostat'
Jan 20 18:42:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:35.214+0000 7f46a8198140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'k8sevents'
Jan 20 18:42:35 compute-0 python3[92747]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934554.8715887-37530-39022613578998/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'localpool'
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 18:42:35 compute-0 sudo[92795]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eojylfjelkarybyvgaqbyfzbvznhonlx ; /usr/bin/python3'
Jan 20 18:42:35 compute-0 sudo[92795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/765852702' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Jan 20 18:42:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/765852702' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Jan 20 18:42:35 compute-0 ceph-mon[74381]: mgrmap e18: compute-0.cepfkm(active, since 14s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:35 compute-0 python3[92797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:35 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mirroring'
Jan 20 18:42:35 compute-0 podman[92798]: 2026-01-20 18:42:35.981896912 +0000 UTC m=+0.074834626 container create 203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac (image=quay.io/ceph/ceph:v19, name=heuristic_jepsen, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'nfs'
Jan 20 18:42:36 compute-0 podman[92798]: 2026-01-20 18:42:35.928388032 +0000 UTC m=+0.021325746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:36 compute-0 systemd[1]: Started libpod-conmon-203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac.scope.
Jan 20 18:42:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a81d23c4a88af15c1b0e409788c0f177fcda97cda96a86e860aa3aaa4a3a425/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a81d23c4a88af15c1b0e409788c0f177fcda97cda96a86e860aa3aaa4a3a425/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a81d23c4a88af15c1b0e409788c0f177fcda97cda96a86e860aa3aaa4a3a425/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:36.275+0000 7f46a8198140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'orchestrator'
Jan 20 18:42:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:36.491+0000 7f46a8198140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 18:42:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:36.574+0000 7f46a8198140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_support'
Jan 20 18:42:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:36.638+0000 7f46a8198140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 18:42:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:36.714+0000 7f46a8198140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'progress'
Jan 20 18:42:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:36.788+0000 7f46a8198140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:42:36 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'prometheus'
Jan 20 18:42:36 compute-0 podman[92798]: 2026-01-20 18:42:36.793091546 +0000 UTC m=+0.886029290 container init 203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac (image=quay.io/ceph/ceph:v19, name=heuristic_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 18:42:36 compute-0 podman[92798]: 2026-01-20 18:42:36.812949893 +0000 UTC m=+0.905887607 container start 203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac (image=quay.io/ceph/ceph:v19, name=heuristic_jepsen, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:37.137+0000 7f46a8198140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rbd_support'
Jan 20 18:42:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:37.236+0000 7f46a8198140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'restful'
Jan 20 18:42:37 compute-0 podman[92798]: 2026-01-20 18:42:37.381752245 +0000 UTC m=+1.474689979 container attach 203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac (image=quay.io/ceph/ceph:v19, name=heuristic_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rgw'
Jan 20 18:42:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:37.676+0000 7f46a8198140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:42:37 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rook'
Jan 20 18:42:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:38.246+0000 7f46a8198140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'selftest'
Jan 20 18:42:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:38.322+0000 7f46a8198140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'snap_schedule'
Jan 20 18:42:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:38.408+0000 7f46a8198140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'stats'
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'status'
Jan 20 18:42:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:38.559+0000 7f46a8198140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telegraf'
Jan 20 18:42:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:38.634+0000 7f46a8198140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telemetry'
Jan 20 18:42:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:38.791+0000 7f46a8198140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:42:38 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:39.012+0000 7f46a8198140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'volumes'
Jan 20 18:42:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:39.279+0000 7f46a8198140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'zabbix'
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:39.347+0000 7f46a8198140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cepfkm
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: ms_deliver_dispatch: unhandled message 0x56361a527860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  1: '-n'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  2: 'mgr.compute-0.cepfkm'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  3: '-f'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  4: '--setuser'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  5: 'ceph'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  6: '--setgroup'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  7: 'ceph'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  8: '--default-log-to-file=false'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  9: '--default-log-to-journald=true'
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setuser ceph since I am not root
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setgroup ceph since I am not root
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: pidfile_write: ignore empty --pid-file
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'alerts'
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:39.805+0000 7f349f46d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'balancer'
Jan 20 18:42:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf started
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.cepfkm(active, starting, since 0.539571s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:39.892+0000 7f349f46d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:42:39 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'cephadm'
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:42:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm started
Jan 20 18:42:40 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:40 compute-0 ceph-mon[74381]: Activating manager daemon compute-0.cepfkm
Jan 20 18:42:40 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'crash'
Jan 20 18:42:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:40.677+0000 7f349f46d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:42:40 compute-0 ceph-mgr[74676]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:42:40 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'dashboard'
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'devicehealth'
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:41.334+0000 7f349f46d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   from numpy import show_config as show_numpy_config
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:41.496+0000 7f349f46d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'influx'
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:41.564+0000 7f349f46d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'insights'
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'iostat'
Jan 20 18:42:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:41.699+0000 7f349f46d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:42:41 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'k8sevents'
Jan 20 18:42:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.cepfkm(active, starting, since 2s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:41 compute-0 ceph-mon[74381]: osdmap e50: 3 total, 3 up, 3 in
Jan 20 18:42:41 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:42:41 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf started
Jan 20 18:42:41 compute-0 ceph-mon[74381]: mgrmap e19: compute-0.cepfkm(active, starting, since 0.539571s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:41 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:42:41 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm started
Jan 20 18:42:42 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'localpool'
Jan 20 18:42:42 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 18:42:42 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mirroring'
Jan 20 18:42:42 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'nfs'
Jan 20 18:42:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:42.765+0000 7f349f46d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:42:42 compute-0 ceph-mgr[74676]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:42:42 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'orchestrator'
Jan 20 18:42:42 compute-0 ceph-mon[74381]: mgrmap e20: compute-0.cepfkm(active, starting, since 2s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.028+0000 7f349f46d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.123+0000 7f349f46d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_support'
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.199+0000 7f349f46d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.300+0000 7f349f46d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'progress'
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.380+0000 7f349f46d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'prometheus'
Jan 20 18:42:43 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 20 18:42:43 compute-0 systemd[90430]: Activating special unit Exit the Session...
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped target Main User Target.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped target Basic System.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped target Paths.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped target Sockets.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped target Timers.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 18:42:43 compute-0 systemd[90430]: Closed D-Bus User Message Bus Socket.
Jan 20 18:42:43 compute-0 systemd[90430]: Stopped Create User's Volatile Files and Directories.
Jan 20 18:42:43 compute-0 systemd[90430]: Removed slice User Application Slice.
Jan 20 18:42:43 compute-0 systemd[90430]: Reached target Shutdown.
Jan 20 18:42:43 compute-0 systemd[90430]: Finished Exit the Session.
Jan 20 18:42:43 compute-0 systemd[90430]: Reached target Exit the Session.
Jan 20 18:42:43 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 20 18:42:43 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 20 18:42:43 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 20 18:42:43 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 20 18:42:43 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 20 18:42:43 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 20 18:42:43 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 20 18:42:43 compute-0 systemd[1]: user-42477.slice: Consumed 5.114s CPU time.
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.788+0000 7f349f46d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rbd_support'
Jan 20 18:42:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:43.883+0000 7f349f46d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:42:43 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'restful'
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rgw'
Jan 20 18:42:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:44.328+0000 7f349f46d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rook'
Jan 20 18:42:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:44.907+0000 7f349f46d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'selftest'
Jan 20 18:42:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:44.981+0000 7f349f46d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:42:44 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'snap_schedule'
Jan 20 18:42:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:45.058+0000 7f349f46d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'stats'
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'status'
Jan 20 18:42:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:45.207+0000 7f349f46d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telegraf'
Jan 20 18:42:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:45.277+0000 7f349f46d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telemetry'
Jan 20 18:42:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:45.432+0000 7f349f46d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 18:42:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:45.670+0000 7f349f46d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'volumes'
Jan 20 18:42:45 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:42:45 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf started
Jan 20 18:42:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:45.957+0000 7f349f46d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:42:45 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'zabbix'
Jan 20 18:42:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:42:46.041+0000 7f349f46d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: ms_deliver_dispatch: unhandled message 0x55cb872ab860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cepfkm
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.cepfkm(active, starting, since 6s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 20 18:42:46 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:42:46 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf started
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.cepfkm(active, starting, since 0.492534s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr handle_mgr_map Activating!
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr handle_mgr_map I am now activating
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: balancer
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Manager daemon compute-0.cepfkm is now available
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Starting
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:42:46
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: cephadm
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: crash
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: dashboard
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: devicehealth
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: iostat
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO sso] Loading SSO DB version=1
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: nfs
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: orchestrator
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Starting
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: pg_autoscaler
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm started
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: progress
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [progress INFO root] Loading...
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f3420f74bb0>, <progress.module.GhostEvent object at 0x7f3420f74be0>, <progress.module.GhostEvent object at 0x7f3420f74c10>, <progress.module.GhostEvent object at 0x7f3420f74c40>, <progress.module.GhostEvent object at 0x7f3420f74c70>, <progress.module.GhostEvent object at 0x7f3420f74ca0>, <progress.module.GhostEvent object at 0x7f3420f74cd0>, <progress.module.GhostEvent object at 0x7f3420f74d00>, <progress.module.GhostEvent object at 0x7f3420f74d30>, <progress.module.GhostEvent object at 0x7f3420f74d60>] historic events
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] recovery thread starting
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] starting setup
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: rbd_support
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: restful
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: status
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: telemetry
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [restful WARNING root] server not running: no certificate configured
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] PerfHandler: starting
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: volumes
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TaskHandler: starting
Jan 20 18:42:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"} v 0)
Jan 20 18:42:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] setup complete
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 20 18:42:46 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 20 18:42:47 compute-0 sshd-session[92986]: Accepted publickey for ceph-admin from 192.168.122.100 port 41482 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:42:47 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 18:42:47 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 18:42:47 compute-0 systemd-logind[796]: New session 36 of user ceph-admin.
Jan 20 18:42:47 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 18:42:47 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 18:42:47 compute-0 systemd[93001]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by (uid=0)
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.module] Engine started.
Jan 20 18:42:47 compute-0 systemd[93001]: Queued start job for default target Main User Target.
Jan 20 18:42:47 compute-0 systemd[93001]: Created slice User Application Slice.
Jan 20 18:42:47 compute-0 systemd[93001]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:42:47 compute-0 systemd[93001]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:42:47 compute-0 systemd[93001]: Reached target Paths.
Jan 20 18:42:47 compute-0 systemd[93001]: Reached target Timers.
Jan 20 18:42:47 compute-0 systemd[93001]: Starting D-Bus User Message Bus Socket...
Jan 20 18:42:47 compute-0 systemd[93001]: Starting Create User's Volatile Files and Directories...
Jan 20 18:42:47 compute-0 systemd[93001]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:42:47 compute-0 systemd[93001]: Reached target Sockets.
Jan 20 18:42:47 compute-0 systemd[93001]: Finished Create User's Volatile Files and Directories.
Jan 20 18:42:47 compute-0 systemd[93001]: Reached target Basic System.
Jan 20 18:42:47 compute-0 systemd[93001]: Reached target Main User Target.
Jan 20 18:42:47 compute-0 systemd[93001]: Startup finished in 129ms.
Jan 20 18:42:47 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 18:42:47 compute-0 systemd[1]: Started Session 36 of User ceph-admin.
Jan 20 18:42:47 compute-0 sshd-session[92986]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by (uid=0)
Jan 20 18:42:47 compute-0 sudo[93018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:47 compute-0 sudo[93018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:47 compute-0 sudo[93018]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:47 compute-0 sudo[93043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:42:47 compute-0 sudo[93043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:42:47 compute-0 ceph-mon[74381]: Activating manager daemon compute-0.cepfkm
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mgrmap e21: compute-0.cepfkm(active, starting, since 6s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:47 compute-0 ceph-mon[74381]: osdmap e51: 3 total, 3 up, 3 in
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mgrmap e22: compute-0.cepfkm(active, starting, since 0.492534s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: Manager daemon compute-0.cepfkm is now available
Jan 20 18:42:47 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:42:47 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm started
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.cepfkm(active, since 1.53437s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14463 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 20 18:42:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0[74377]: 2026-01-20T18:42:47.693+0000 7f58856b8640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v3: 135 pgs: 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e2 new map
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-20T18:42:47.693934+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:42:47.693845+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 18:42:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:47 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 18:42:47 compute-0 systemd[1]: libpod-203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac.scope: Deactivated successfully.
Jan 20 18:42:47 compute-0 podman[93119]: 2026-01-20 18:42:47.78416921 +0000 UTC m=+0.024299980 container died 203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac (image=quay.io/ceph/ceph:v19, name=heuristic_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a81d23c4a88af15c1b0e409788c0f177fcda97cda96a86e860aa3aaa4a3a425-merged.mount: Deactivated successfully.
Jan 20 18:42:47 compute-0 podman[93119]: 2026-01-20 18:42:47.830830698 +0000 UTC m=+0.070961468 container remove 203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac (image=quay.io/ceph/ceph:v19, name=heuristic_jepsen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 18:42:47 compute-0 systemd[1]: libpod-conmon-203ef91a41e9116edc6e0331c128b3ed9cfcccef464449bcf754db7a9e6aefac.scope: Deactivated successfully.
Jan 20 18:42:47 compute-0 sudo[92795]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:47 compute-0 podman[93150]: 2026-01-20 18:42:47.895772856 +0000 UTC m=+0.058095607 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:42:47 compute-0 podman[93150]: 2026-01-20 18:42:47.992182792 +0000 UTC m=+0.154505523 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:42:47 compute-0 sudo[93195]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpwlimtdxpmesfikiljqefiexsyuzwnh ; /usr/bin/python3'
Jan 20 18:42:47 compute-0 sudo[93195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:48 compute-0 python3[93198]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:48 compute-0 podman[93235]: 2026-01-20 18:42:48.182439028 +0000 UTC m=+0.040255529 container create a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 20 18:42:48 compute-0 systemd[1]: Started libpod-conmon-a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275.scope.
Jan 20 18:42:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89597f8bb9a942e523329b94c0f990841899c6fda6de4044be7bd4503dc077c0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89597f8bb9a942e523329b94c0f990841899c6fda6de4044be7bd4503dc077c0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/89597f8bb9a942e523329b94c0f990841899c6fda6de4044be7bd4503dc077c0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:48 compute-0 podman[93235]: 2026-01-20 18:42:48.162328214 +0000 UTC m=+0.020144735 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:48 compute-0 podman[93235]: 2026-01-20 18:42:48.2619536 +0000 UTC m=+0.119770121 container init a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 18:42:48 compute-0 podman[93235]: 2026-01-20 18:42:48.272866023 +0000 UTC m=+0.130682534 container start a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:42:48 compute-0 podman[93235]: 2026-01-20 18:42:48.298391183 +0000 UTC m=+0.156207684 container attach a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v5: 135 pgs: 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:48] ENGINE Bus STARTING
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:48] ENGINE Bus STARTING
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:48] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:42:48 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:48] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:49] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:49] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:49] ENGINE Bus STARTED
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:49] ENGINE Bus STARTED
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:42:49] ENGINE Client ('192.168.122.100', 56796) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:42:49] ENGINE Client ('192.168.122.100', 56796) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 podman[93345]: 2026-01-20 18:42:49.083973885 +0000 UTC m=+0.621667177 container exec ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:42:49 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:49 compute-0 podman[93345]: 2026-01-20 18:42:49.139215009 +0000 UTC m=+0.676908251 container exec_died ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mgrmap e23: compute-0.cepfkm(active, since 1.53437s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:49 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Jan 20 18:42:49 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Jan 20 18:42:49 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Jan 20 18:42:49 compute-0 ceph-mon[74381]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 18:42:49 compute-0 ceph-mon[74381]: osdmap e52: 3 total, 3 up, 3 in
Jan 20 18:42:49 compute-0 ceph-mon[74381]: fsmap cephfs:0
Jan 20 18:42:49 compute-0 ceph-mon[74381]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:49 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 ceph-mon[74381]: pgmap v5: 135 pgs: 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:49 compute-0 ceph-mon[74381]: from='client.14490 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:49 compute-0 ceph-mon[74381]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.cepfkm(active, since 3s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:49 compute-0 kind_jepsen[93275]: Scheduled mds.cephfs update...
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e52 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:42:49 compute-0 systemd[1]: libpod-a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275.scope: Deactivated successfully.
Jan 20 18:42:49 compute-0 podman[93235]: 2026-01-20 18:42:49.232858486 +0000 UTC m=+1.090674997 container died a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:42:49 compute-0 sudo[93043]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-89597f8bb9a942e523329b94c0f990841899c6fda6de4044be7bd4503dc077c0-merged.mount: Deactivated successfully.
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:49 compute-0 podman[93235]: 2026-01-20 18:42:49.567938741 +0000 UTC m=+1.425755242 container remove a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275 (image=quay.io/ceph/ceph:v19, name=kind_jepsen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:42:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:49 compute-0 sudo[93195]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:49 compute-0 sudo[93430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:49 compute-0 sudo[93430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:49 compute-0 sudo[93430]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:49 compute-0 systemd[1]: libpod-conmon-a2300a7ebd6b5c639905a4fafcadb52e1373acc7ffc76c07705ff6ab6ef16275.scope: Deactivated successfully.
Jan 20 18:42:49 compute-0 sudo[93455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:42:49 compute-0 sudo[93455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:49 compute-0 sudo[93503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycofaajgbbcqwlmslprnmcuxqtywrejm ; /usr/bin/python3'
Jan 20 18:42:49 compute-0 sudo[93503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:49 compute-0 python3[93505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol '--placement=compute-0 compute-1 compute-2 ' _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:49 compute-0 podman[93517]: 2026-01-20 18:42:49.928968136 +0000 UTC m=+0.052975318 container create cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7 (image=quay.io/ceph/ceph:v19, name=stoic_hertz, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:42:49 compute-0 systemd[1]: Started libpod-conmon-cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7.scope.
Jan 20 18:42:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ca468933ab3cd1ead9e920b996c72df62e6642b6c9f49985fae769f86c7968/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ca468933ab3cd1ead9e920b996c72df62e6642b6c9f49985fae769f86c7968/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ca468933ab3cd1ead9e920b996c72df62e6642b6c9f49985fae769f86c7968/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:50 compute-0 podman[93517]: 2026-01-20 18:42:49.906331209 +0000 UTC m=+0.030338401 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:50 compute-0 podman[93517]: 2026-01-20 18:42:50.03612077 +0000 UTC m=+0.160127962 container init cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7 (image=quay.io/ceph/ceph:v19, name=stoic_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 18:42:50 compute-0 podman[93517]: 2026-01-20 18:42:50.046516061 +0000 UTC m=+0.170523243 container start cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7 (image=quay.io/ceph/ceph:v19, name=stoic_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:50 compute-0 podman[93517]: 2026-01-20 18:42:50.070771419 +0000 UTC m=+0.194778591 container attach cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7 (image=quay.io/ceph/ceph:v19, name=stoic_hertz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:42:50 compute-0 sudo[93455]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:50 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:48] ENGINE Bus STARTING
Jan 20 18:42:50 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:48] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:42:50 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:49] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:42:50 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:49] ENGINE Bus STARTED
Jan 20 18:42:50 compute-0 ceph-mon[74381]: [20/Jan/2026:18:42:49] ENGINE Client ('192.168.122.100', 56796) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mgrmap e24: compute-0.cepfkm(active, since 3s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 sudo[93577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:42:50 compute-0 sudo[93577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:50 compute-0 sudo[93577]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 18:42:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:50 compute-0 sudo[93602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 20 18:42:50 compute-0 sudo[93602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14502 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Jan 20 18:42:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 20 18:42:50 compute-0 sudo[93602]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:42:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v6: 135 pgs: 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 20 18:42:51 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.cepfkm(active, since 5s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 20 18:42:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 20 18:42:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:51 compute-0 ceph-mon[74381]: from='client.14502 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 18:42:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Jan 20 18:42:51 compute-0 ceph-mon[74381]: pgmap v6: 135 pgs: 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 18:42:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 sudo[93648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:42:52 compute-0 sudo[93648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93648]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:42:52 compute-0 sudo[93673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93673]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[93698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93698]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:52 compute-0 sudo[93723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93723]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[93748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93748]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[93796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93796]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[93821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93821]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 18:42:52 compute-0 sudo[93846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93846]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:52 compute-0 sudo[93871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:52 compute-0 sudo[93871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93871]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v8: 136 pgs: 1 unknown, 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:52 compute-0 sudo[93896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:52 compute-0 sudo[93896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93896]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[93921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93921]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:52 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:52 compute-0 sudo[93946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:52 compute-0 sudo[93946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93946]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[93971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[93971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[93971]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:52 compute-0 sudo[94019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:52 compute-0 sudo[94019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:52 compute-0 sudo[94019]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:42:53 compute-0 sudo[94044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 20 18:42:53 compute-0 sudo[94044]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:42:53 compute-0 ceph-mon[74381]: mgrmap e25: compute-0.cepfkm(active, since 5s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:53 compute-0 ceph-mon[74381]: osdmap e53: 3 total, 3 up, 3 in
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:42:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:42:53 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:42:53 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:42:53 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:42:53 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:53 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:53 compute-0 ceph-mon[74381]: pgmap v8: 136 pgs: 1 unknown, 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:42:53 compute-0 sudo[94069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:53 compute-0 sudo[94069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94069]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 20 18:42:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:42:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:42:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:53 compute-0 sudo[94094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:42:53 compute-0 sudo[94094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94094]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 systemd[1]: libpod-cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7.scope: Deactivated successfully.
Jan 20 18:42:53 compute-0 podman[93517]: 2026-01-20 18:42:53.181463355 +0000 UTC m=+3.305470537 container died cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7 (image=quay.io/ceph/ceph:v19, name=stoic_hertz, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ca468933ab3cd1ead9e920b996c72df62e6642b6c9f49985fae769f86c7968-merged.mount: Deactivated successfully.
Jan 20 18:42:53 compute-0 podman[93517]: 2026-01-20 18:42:53.232581515 +0000 UTC m=+3.356588697 container remove cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7 (image=quay.io/ceph/ceph:v19, name=stoic_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 18:42:53 compute-0 sudo[94131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:42:53 compute-0 sudo[94131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94131]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 systemd[1]: libpod-conmon-cbd16c39c67888adbbc8bddbe17f5c6157e6456f25b409621fccabb6e5a47ff7.scope: Deactivated successfully.
Jan 20 18:42:53 compute-0 sudo[93503]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:53 compute-0 sudo[94165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94165]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94190]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:53 compute-0 sudo[94190]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94190]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 sudo[94215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:53 compute-0 sudo[94215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94215]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:53 compute-0 sudo[94263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94263]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:42:53 compute-0 sudo[94288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94288]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 sudo[94313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94313]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:53 compute-0 sudo[94338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:53 compute-0 sudo[94338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94338]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:42:53 compute-0 sudo[94363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94363]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:53 compute-0 sudo[94388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94388]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:42:53 compute-0 sudo[94413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94413]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:53 compute-0 sudo[94438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:53 compute-0 sudo[94438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:53 compute-0 sudo[94438]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:54 compute-0 sudo[94486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:54 compute-0 sudo[94486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:54 compute-0 sudo[94486]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:54 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:54 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:54 compute-0 sudo[94511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:42:54 compute-0 sudo[94511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:54 compute-0 sudo[94511]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:54 compute-0 sudo[94540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:54 compute-0 sudo[94540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:42:54 compute-0 sudo[94540]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Jan 20 18:42:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:54 compute-0 ceph-mon[74381]: osdmap e54: 3 total, 3 up, 3 in
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Jan 20 18:42:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:54 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 20 18:42:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 20 18:42:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.cepfkm(active, since 8s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:54 compute-0 sudo[94636]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tomlnpdwizitrddbztyicrmpzqwddwty ; /usr/bin/python3'
Jan 20 18:42:54 compute-0 sudo[94636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:54 compute-0 python3[94638]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:42:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:54 compute-0 sudo[94636]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v11: 136 pgs: 1 creating+peering, 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Jan 20 18:42:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:54 compute-0 sudo[94709]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gucpjvdseuckkjpexcajykaxwjzsbhdx ; /usr/bin/python3'
Jan 20 18:42:54 compute-0 sudo[94709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:42:54 compute-0 python3[94711]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768934574.1888149-37585-95245050303369/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=465f1f6abe8e4d723d0b6c413f0a5a323af4f262 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:54 compute-0 sudo[94709]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:55 compute-0 sudo[94759]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckpurfdrkgpsqcponzsfrulctyetetku ; /usr/bin/python3'
Jan 20 18:42:55 compute-0 sudo[94759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:55 compute-0 python3[94761]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:42:55 compute-0 podman[94762]: 2026-01-20 18:42:55.3963922 +0000 UTC m=+0.056619330 container create ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781 (image=quay.io/ceph/ceph:v19, name=practical_cohen, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:42:55 compute-0 podman[94762]: 2026-01-20 18:42:55.363649169 +0000 UTC m=+0.023876319 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:55 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:55 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:55 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:42:55 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 ceph-mon[74381]: osdmap e55: 3 total, 3 up, 3 in
Jan 20 18:42:55 compute-0 ceph-mon[74381]: mgrmap e26: compute-0.cepfkm(active, since 8s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:42:55 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 ceph-mon[74381]: pgmap v11: 136 pgs: 1 creating+peering, 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Jan 20 18:42:55 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 systemd[1]: Started libpod-conmon-ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781.scope.
Jan 20 18:42:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/330f1f084dd8d487ae635e902d8591c90893ad4129f1ed61219557be65aba03c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/330f1f084dd8d487ae635e902d8591c90893ad4129f1ed61219557be65aba03c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:55 compute-0 podman[94762]: 2026-01-20 18:42:55.615567791 +0000 UTC m=+0.275794971 container init ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781 (image=quay.io/ceph/ceph:v19, name=practical_cohen, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:42:55 compute-0 podman[94762]: 2026-01-20 18:42:55.626609517 +0000 UTC m=+0.286836687 container start ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781 (image=quay.io/ceph/ceph:v19, name=practical_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:42:55 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:42:55 compute-0 podman[94762]: 2026-01-20 18:42:55.721591817 +0000 UTC m=+0.381818967 container attach ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781 (image=quay.io/ceph/ceph:v19, name=practical_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 18:42:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:55 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 96f6ea55-00b8-42c0-9562-d0518a71e62c (Updating node-exporter deployment (+2 -> 3))
Jan 20 18:42:55 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Jan 20 18:42:55 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Jan 20 18:42:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 20 18:42:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4029950408' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 20 18:42:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4029950408' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 18:42:56 compute-0 systemd[1]: libpod-ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781.scope: Deactivated successfully.
Jan 20 18:42:56 compute-0 podman[94762]: 2026-01-20 18:42:56.120654355 +0000 UTC m=+0.780881485 container died ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781 (image=quay.io/ceph/ceph:v19, name=practical_cohen, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-330f1f084dd8d487ae635e902d8591c90893ad4129f1ed61219557be65aba03c-merged.mount: Deactivated successfully.
Jan 20 18:42:56 compute-0 podman[94762]: 2026-01-20 18:42:56.356670879 +0000 UTC m=+1.016898009 container remove ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781 (image=quay.io/ceph/ceph:v19, name=practical_cohen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:56 compute-0 sudo[94759]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:56 compute-0 systemd[1]: libpod-conmon-ccfc307de0e0bbc831c361b7e011bb76b039a0d4a355a83a2699392d7e818781.scope: Deactivated successfully.
Jan 20 18:42:56 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:56 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:56 compute-0 ceph-mon[74381]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Jan 20 18:42:56 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4029950408' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Jan 20 18:42:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4029950408' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 18:42:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v12: 136 pgs: 1 creating+peering, 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Jan 20 18:42:56 compute-0 ceph-mgr[74676]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Jan 20 18:42:56 compute-0 sudo[94841]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrqwcadaanvoifkonyeljbmddgqavkbs ; /usr/bin/python3'
Jan 20 18:42:56 compute-0 sudo[94841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:57 compute-0 python3[94843]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.259399786 +0000 UTC m=+0.118049109 container create 2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a (image=quay.io/ceph/ceph:v19, name=elastic_shtern, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.165634196 +0000 UTC m=+0.024283539 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:57 compute-0 systemd[1]: Started libpod-conmon-2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a.scope.
Jan 20 18:42:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21258f251066f426362d3d525ffc05581b6757dcf9d10ed48f0f7e8a6bd59bf9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21258f251066f426362d3d525ffc05581b6757dcf9d10ed48f0f7e8a6bd59bf9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.425393425 +0000 UTC m=+0.284042768 container init 2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a (image=quay.io/ceph/ceph:v19, name=elastic_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.434158104 +0000 UTC m=+0.292807427 container start 2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a (image=quay.io/ceph/ceph:v19, name=elastic_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.524628581 +0000 UTC m=+0.383277924 container attach 2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a (image=quay.io/ceph/ceph:v19, name=elastic_shtern, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:42:57 compute-0 ceph-mon[74381]: Deploying daemon node-exporter.compute-1 on compute-1
Jan 20 18:42:57 compute-0 ceph-mon[74381]: pgmap v12: 136 pgs: 1 creating+peering, 135 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 11 op/s
Jan 20 18:42:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 18:42:57 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3962010139' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:42:57 compute-0 elastic_shtern[94861]: 
Jan 20 18:42:57 compute-0 elastic_shtern[94861]: {"fsid":"aecbbf3b-b405-507b-97d7-637a83f5b4b1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":98,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":55,"num_osds":3,"num_up_osds":3,"osd_up_since":1768934507,"num_in_osds":3,"osd_in_since":1768934485,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":135},{"state_name":"creating+peering","count":1}],"num_pgs":136,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":89030656,"bytes_avail":64322895872,"bytes_total":64411926528,"inactive_pgs_ratio":0.0073529412038624287,"read_bytes_sec":30030,"write_bytes_sec":0,"read_op_per_sec":9,"write_op_per_sec":1},"fsmap":{"epoch":2,"btime":"2026-01-20T18:42:47:693934+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-01-20T18:42:24.549709+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.cepfkm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.whkwsm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.pyghhf":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14364":{"start_epoch":5,"start_stamp":"2026-01-20T18:42:23.280618+0000","gid":14364,"addr":"192.168.122.100:0/2396071884","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.phlxkp","kernel_description":"#1 SMP 
PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864316","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"e5424423-e0b7-453a-acdd-580a59c79a77","zone_name":"default","zonegroup_id":"3115895e-8a03-4fc4-b262-7d669efe3b52","zonegroup_name":"default"},"task_status":{}},"24128":{"start_epoch":5,"start_stamp":"2026-01-20T18:42:23.465869+0000","gid":24128,"addr":"192.168.122.101:0/3140638165","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.unzimq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864304","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"e5424423-e0b7-453a-acdd-580a59c79a77","zone_name":"default","zonegroup_id":"3115895e-8a03-4fc4-b262-7d669efe3b52","zonegroup_name":"default"},"task_status":{}},"24145":{"start_epoch":5,"start_stamp":"2026-01-20T18:42:23.169899+0000","gid":24145,"addr":"192.168.122.102:0/2196779070","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.mqbqmb","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864300","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"e5424423-e0b7-453a-acdd-580a59c79a77","zone_name":"default","zonegroup_id":"3115895e-8a03-4fc4-b262-7d669efe3b52","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"96f6ea55-00b8-42c0-9562-d0518a71e62c":{"message":"Updating node-exporter deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Jan 20 18:42:57 compute-0 systemd[1]: libpod-2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a.scope: Deactivated successfully.
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.885178244 +0000 UTC m=+0.743827557 container died 2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a (image=quay.io/ceph/ceph:v19, name=elastic_shtern, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-21258f251066f426362d3d525ffc05581b6757dcf9d10ed48f0f7e8a6bd59bf9-merged.mount: Deactivated successfully.
Jan 20 18:42:57 compute-0 podman[94845]: 2026-01-20 18:42:57.973406844 +0000 UTC m=+0.832056167 container remove 2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a (image=quay.io/ceph/ceph:v19, name=elastic_shtern, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:42:57 compute-0 sudo[94841]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:57 compute-0 systemd[1]: libpod-conmon-2b5e9904ea99aa12a54cac382443d4acd5b0f91c412efd583d4f9be9e593e81a.scope: Deactivated successfully.
Jan 20 18:42:58 compute-0 sudo[94921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kktnnzttvyuqbyyjtxfyqxeskvtyrntl ; /usr/bin/python3'
Jan 20 18:42:58 compute-0 sudo[94921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:58 compute-0 python3[94923]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:42:58 compute-0 podman[94924]: 2026-01-20 18:42:58.307095905 +0000 UTC m=+0.025171822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:42:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v13: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Jan 20 18:42:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:42:58 compute-0 podman[94924]: 2026-01-20 18:42:58.753758456 +0000 UTC m=+0.471834343 container create d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b (image=quay.io/ceph/ceph:v19, name=musing_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:42:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:42:58 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3962010139' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:42:58 compute-0 ceph-mon[74381]: pgmap v13: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Jan 20 18:42:58 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:58 compute-0 systemd[1]: Started libpod-conmon-d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b.scope.
Jan 20 18:42:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f1a5dcd4232fdd5bc4673360e2f3e7b1a8d0417a4cb7b80f03715386672935/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75f1a5dcd4232fdd5bc4673360e2f3e7b1a8d0417a4cb7b80f03715386672935/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:42:58 compute-0 podman[94924]: 2026-01-20 18:42:58.908309288 +0000 UTC m=+0.626385475 container init d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b (image=quay.io/ceph/ceph:v19, name=musing_leavitt, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:42:58 compute-0 podman[94924]: 2026-01-20 18:42:58.915331024 +0000 UTC m=+0.633406911 container start d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b (image=quay.io/ceph/ceph:v19, name=musing_leavitt, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 18:42:59 compute-0 podman[94924]: 2026-01-20 18:42:59.044587672 +0000 UTC m=+0.762663599 container attach d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b (image=quay.io/ceph/ceph:v19, name=musing_leavitt, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 18:42:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 20 18:42:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 18:42:59 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422934820' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 18:42:59 compute-0 musing_leavitt[94939]: 
Jan 20 18:42:59 compute-0 musing_leavitt[94939]: {"epoch":3,"fsid":"aecbbf3b-b405-507b-97d7-637a83f5b4b1","modified":"2026-01-20T18:41:12.004140Z","created":"2026-01-20T18:38:43.724879Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Jan 20 18:42:59 compute-0 musing_leavitt[94939]: dumped monmap epoch 3
Jan 20 18:42:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:42:59 compute-0 systemd[1]: libpod-d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b.scope: Deactivated successfully.
Jan 20 18:42:59 compute-0 podman[94924]: 2026-01-20 18:42:59.393026592 +0000 UTC m=+1.111102489 container died d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b (image=quay.io/ceph/ceph:v19, name=musing_leavitt, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:42:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:42:59 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Jan 20 18:42:59 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Jan 20 18:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-75f1a5dcd4232fdd5bc4673360e2f3e7b1a8d0417a4cb7b80f03715386672935-merged.mount: Deactivated successfully.
Jan 20 18:42:59 compute-0 podman[94924]: 2026-01-20 18:42:59.663618822 +0000 UTC m=+1.381694739 container remove d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b (image=quay.io/ceph/ceph:v19, name=musing_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:42:59 compute-0 systemd[1]: libpod-conmon-d4c85b14889e35a7d0fb2da7e192ab4e69354970723b7fd118bf11caf7cf051b.scope: Deactivated successfully.
Jan 20 18:42:59 compute-0 sudo[94921]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:00 compute-0 sudo[95001]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sovjwttfexhgyuznfjlnrpkjhthqswqq ; /usr/bin/python3'
Jan 20 18:43:00 compute-0 sudo[95001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:00 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/422934820' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 18:43:00 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:00 compute-0 ceph-mon[74381]: Deploying daemon node-exporter.compute-2 on compute-2
Jan 20 18:43:00 compute-0 python3[95003]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:00 compute-0 podman[95004]: 2026-01-20 18:43:00.461207834 +0000 UTC m=+0.056649029 container create 1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4 (image=quay.io/ceph/ceph:v19, name=modest_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:43:00 compute-0 systemd[1]: Started libpod-conmon-1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4.scope.
Jan 20 18:43:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ddd14ca2ac7552381ef249725da0fac97f4024e4f3264f3f33df313ce1b865/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:00 compute-0 podman[95004]: 2026-01-20 18:43:00.437393488 +0000 UTC m=+0.032834683 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1ddd14ca2ac7552381ef249725da0fac97f4024e4f3264f3f33df313ce1b865/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v14: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Jan 20 18:43:00 compute-0 podman[95004]: 2026-01-20 18:43:00.765210461 +0000 UTC m=+0.360651736 container init 1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4 (image=quay.io/ceph/ceph:v19, name=modest_bohr, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:43:00 compute-0 podman[95004]: 2026-01-20 18:43:00.778211287 +0000 UTC m=+0.373652462 container start 1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4 (image=quay.io/ceph/ceph:v19, name=modest_bohr, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:43:00 compute-0 podman[95004]: 2026-01-20 18:43:00.785018167 +0000 UTC m=+0.380459342 container attach 1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4 (image=quay.io/ceph/ceph:v19, name=modest_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:43:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 20 18:43:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1753984997' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 20 18:43:01 compute-0 modest_bohr[95019]: [client.openstack]
Jan 20 18:43:01 compute-0 modest_bohr[95019]:         key = AQCMy29pAAAAABAAS5mI8AokUU3QFTWUgUlXCA==
Jan 20 18:43:01 compute-0 modest_bohr[95019]:         caps mgr = "allow *"
Jan 20 18:43:01 compute-0 modest_bohr[95019]:         caps mon = "profile rbd"
Jan 20 18:43:01 compute-0 modest_bohr[95019]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 20 18:43:01 compute-0 systemd[1]: libpod-1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4.scope: Deactivated successfully.
Jan 20 18:43:01 compute-0 podman[95004]: 2026-01-20 18:43:01.275334342 +0000 UTC m=+0.870775517 container died 1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4 (image=quay.io/ceph/ceph:v19, name=modest_bohr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:01 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:43:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1ddd14ca2ac7552381ef249725da0fac97f4024e4f3264f3f33df313ce1b865-merged.mount: Deactivated successfully.
Jan 20 18:43:01 compute-0 podman[95004]: 2026-01-20 18:43:01.396097568 +0000 UTC m=+0.991538743 container remove 1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4 (image=quay.io/ceph/ceph:v19, name=modest_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:01 compute-0 ceph-mon[74381]: pgmap v14: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Jan 20 18:43:01 compute-0 systemd[1]: libpod-conmon-1cabfc92ab0b293533c97c3fc8e71d6598727de19813a343718194148414f0c4.scope: Deactivated successfully.
Jan 20 18:43:01 compute-0 sudo[95001]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:01 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event f0896ff2-5915-414d-a85f-38e64e8413c9 (Global Recovery Event) in 5 seconds
Jan 20 18:43:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:43:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1753984997' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Jan 20 18:43:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v15: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:02 compute-0 sudo[95205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-extnsjfnbkqjluzwqagukvjbqolvbhbf ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768934582.3975224-37657-81865436630370/async_wrapper.py j543547817355 30 /home/zuul/.ansible/tmp/ansible-tmp-1768934582.3975224-37657-81865436630370/AnsiballZ_command.py _'
Jan 20 18:43:02 compute-0 sudo[95205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:02 compute-0 ansible-async_wrapper.py[95207]: Invoked with j543547817355 30 /home/zuul/.ansible/tmp/ansible-tmp-1768934582.3975224-37657-81865436630370/AnsiballZ_command.py _
Jan 20 18:43:02 compute-0 ansible-async_wrapper.py[95210]: Starting module and watcher
Jan 20 18:43:02 compute-0 ansible-async_wrapper.py[95210]: Start watching 95211 (30)
Jan 20 18:43:02 compute-0 ansible-async_wrapper.py[95211]: Start module (95211)
Jan 20 18:43:02 compute-0 ansible-async_wrapper.py[95207]: Return async_wrapper task started.
Jan 20 18:43:02 compute-0 sudo[95205]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:02 compute-0 python3[95212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.10808369 +0000 UTC m=+0.094967930 container create c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891 (image=quay.io/ceph/ceph:v19, name=nostalgic_franklin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.034638909 +0000 UTC m=+0.021523149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:03 compute-0 systemd[1]: Started libpod-conmon-c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891.scope.
Jan 20 18:43:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a2d420b7b0e1f78024f8b433d0ec6d6a1bbca819a0b880886a7b93c53c9a75/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a2d420b7b0e1f78024f8b433d0ec6d6a1bbca819a0b880886a7b93c53c9a75/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.224172159 +0000 UTC m=+0.211056429 container init c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891 (image=quay.io/ceph/ceph:v19, name=nostalgic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.231849471 +0000 UTC m=+0.218733711 container start c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891 (image=quay.io/ceph/ceph:v19, name=nostalgic_franklin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.331055616 +0000 UTC m=+0.317939896 container attach c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891 (image=quay.io/ceph/ceph:v19, name=nostalgic_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:03 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 96f6ea55-00b8-42c0-9562-d0518a71e62c (Updating node-exporter deployment (+2 -> 3))
Jan 20 18:43:03 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 96f6ea55-00b8-42c0-9562-d0518a71e62c (Updating node-exporter deployment (+2 -> 3)) in 8 seconds
Jan 20 18:43:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:43:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:43:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:03 compute-0 sudo[95252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:03 compute-0 sudo[95252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:03 compute-0 sudo[95252]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:03 compute-0 sudo[95277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:43:03 compute-0 sudo[95277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:03 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:03 compute-0 nostalgic_franklin[95229]: 
Jan 20 18:43:03 compute-0 nostalgic_franklin[95229]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 18:43:03 compute-0 systemd[1]: libpod-c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891.scope: Deactivated successfully.
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.627141104 +0000 UTC m=+0.614025344 container died c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891 (image=quay.io/ceph/ceph:v19, name=nostalgic_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2a2d420b7b0e1f78024f8b433d0ec6d6a1bbca819a0b880886a7b93c53c9a75-merged.mount: Deactivated successfully.
Jan 20 18:43:03 compute-0 podman[95213]: 2026-01-20 18:43:03.718575835 +0000 UTC m=+0.705460075 container remove c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891 (image=quay.io/ceph/ceph:v19, name=nostalgic_franklin, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:03 compute-0 ansible-async_wrapper.py[95211]: Module complete (95211)
Jan 20 18:43:03 compute-0 systemd[1]: libpod-conmon-c598a20f48e3a88e94f7aa7fb7128f86bdcf88f295adf6f664ec42f8a5293891.scope: Deactivated successfully.
Jan 20 18:43:03 compute-0 podman[95356]: 2026-01-20 18:43:03.891501557 +0000 UTC m=+0.044130086 container create b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_albattani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:43:03 compute-0 podman[95356]: 2026-01-20 18:43:03.870672045 +0000 UTC m=+0.023300594 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:03 compute-0 systemd[1]: Started libpod-conmon-b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18.scope.
Jan 20 18:43:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:04 compute-0 podman[95356]: 2026-01-20 18:43:04.014079469 +0000 UTC m=+0.166708028 container init b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 20 18:43:04 compute-0 sudo[95420]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjfqzbxbzlzadwhkavpitfvzfmayyrsj ; /usr/bin/python3'
Jan 20 18:43:04 compute-0 podman[95356]: 2026-01-20 18:43:04.021431343 +0000 UTC m=+0.174059872 container start b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_albattani, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:04 compute-0 sudo[95420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:04 compute-0 sad_albattani[95397]: 167 167
Jan 20 18:43:04 compute-0 systemd[1]: libpod-b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18.scope: Deactivated successfully.
Jan 20 18:43:04 compute-0 podman[95356]: 2026-01-20 18:43:04.039046195 +0000 UTC m=+0.191674724 container attach b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_albattani, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:43:04 compute-0 podman[95356]: 2026-01-20 18:43:04.039953237 +0000 UTC m=+0.192581776 container died b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_albattani, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:43:04 compute-0 ceph-mon[74381]: pgmap v15: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:43:04 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-90a6bac70b2c18f6851671c177aede7dedcc102bc941ed37ca8f4615f8f01625-merged.mount: Deactivated successfully.
Jan 20 18:43:04 compute-0 python3[95424]: ansible-ansible.legacy.async_status Invoked with jid=j543547817355.95207 mode=status _async_dir=/root/.ansible_async
Jan 20 18:43:04 compute-0 podman[95356]: 2026-01-20 18:43:04.17338647 +0000 UTC m=+0.326014999 container remove b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:04 compute-0 sudo[95420]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:04 compute-0 systemd[1]: libpod-conmon-b9824a9ebd948947328dab0725433090ce665b1bb8b642778740f3b6f37f8a18.scope: Deactivated successfully.
Jan 20 18:43:04 compute-0 sudo[95490]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydywkllcfbkukwayzveictskkdevkqyj ; /usr/bin/python3'
Jan 20 18:43:04 compute-0 sudo[95490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.330943477 +0000 UTC m=+0.045678075 container create 93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.311054009 +0000 UTC m=+0.025788627 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:04 compute-0 python3[95498]: ansible-ansible.legacy.async_status Invoked with jid=j543547817355.95207 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 18:43:04 compute-0 sudo[95490]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:04 compute-0 systemd[1]: Started libpod-conmon-93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988.scope.
Jan 20 18:43:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55db5538f5eab7cbee8e5021c916b07e1871c6868b13c8bc239e166dcd3825bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55db5538f5eab7cbee8e5021c916b07e1871c6868b13c8bc239e166dcd3825bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55db5538f5eab7cbee8e5021c916b07e1871c6868b13c8bc239e166dcd3825bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55db5538f5eab7cbee8e5021c916b07e1871c6868b13c8bc239e166dcd3825bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55db5538f5eab7cbee8e5021c916b07e1871c6868b13c8bc239e166dcd3825bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.605503296 +0000 UTC m=+0.320237904 container init 93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.611547157 +0000 UTC m=+0.326281755 container start 93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.615710442 +0000 UTC m=+0.330445070 container attach 93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v16: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:04 compute-0 sudo[95542]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxywgayoqgzamovisbpxpevdgmdaxfvr ; /usr/bin/python3'
Jan 20 18:43:04 compute-0 sudo[95542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:04 compute-0 wizardly_snyder[95508]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:43:04 compute-0 wizardly_snyder[95508]: --> All data devices are unavailable
Jan 20 18:43:04 compute-0 systemd[1]: libpod-93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988.scope: Deactivated successfully.
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.933996987 +0000 UTC m=+0.648731585 container died 93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-55db5538f5eab7cbee8e5021c916b07e1871c6868b13c8bc239e166dcd3825bc-merged.mount: Deactivated successfully.
Jan 20 18:43:04 compute-0 podman[95489]: 2026-01-20 18:43:04.98643157 +0000 UTC m=+0.701166168 container remove 93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:43:05 compute-0 systemd[1]: libpod-conmon-93e8c6fd48311cd83167818bd52ecf69d810431c96682b848a4bce1faaa1a988.scope: Deactivated successfully.
Jan 20 18:43:05 compute-0 python3[95546]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:05 compute-0 sudo[95277]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:05 compute-0 sudo[95562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:05 compute-0 sudo[95562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:05 compute-0 sudo[95562]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.088912557 +0000 UTC m=+0.046472475 container create b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac (image=quay.io/ceph/ceph:v19, name=boring_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:43:05 compute-0 systemd[1]: Started libpod-conmon-b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac.scope.
Jan 20 18:43:05 compute-0 ceph-mon[74381]: from='client.14538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:05 compute-0 ceph-mon[74381]: pgmap v16: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:05 compute-0 sudo[95598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ed849c1282aeff127523f9f7f906bb239b75f52840f58437e7429ad54a793e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53ed849c1282aeff127523f9f7f906bb239b75f52840f58437e7429ad54a793e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:05 compute-0 sudo[95598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.068538968 +0000 UTC m=+0.026098896 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.165204679 +0000 UTC m=+0.122764597 container init b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac (image=quay.io/ceph/ceph:v19, name=boring_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.171414405 +0000 UTC m=+0.128974313 container start b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac (image=quay.io/ceph/ceph:v19, name=boring_johnson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.174853921 +0000 UTC m=+0.132413839 container attach b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac (image=quay.io/ceph/ceph:v19, name=boring_johnson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:43:05 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14544 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:05 compute-0 boring_johnson[95623]: 
Jan 20 18:43:05 compute-0 boring_johnson[95623]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 18:43:05 compute-0 systemd[1]: libpod-b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac.scope: Deactivated successfully.
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.545515497 +0000 UTC m=+0.503075435 container died b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac (image=quay.io/ceph/ceph:v19, name=boring_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:43:05 compute-0 podman[95687]: 2026-01-20 18:43:05.562178925 +0000 UTC m=+0.047028649 container create eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 18:43:05 compute-0 podman[95687]: 2026-01-20 18:43:05.535575458 +0000 UTC m=+0.020425212 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-53ed849c1282aeff127523f9f7f906bb239b75f52840f58437e7429ad54a793e-merged.mount: Deactivated successfully.
Jan 20 18:43:05 compute-0 podman[95561]: 2026-01-20 18:43:05.781879289 +0000 UTC m=+0.739439207 container remove b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac (image=quay.io/ceph/ceph:v19, name=boring_johnson, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:05 compute-0 sudo[95542]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:05 compute-0 systemd[1]: libpod-conmon-b9a874e3ba3b1a39b1ab9651a94b28d7c4b92d27476f3dbd8a76e9ec319e11ac.scope: Deactivated successfully.
Jan 20 18:43:05 compute-0 systemd[1]: Started libpod-conmon-eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87.scope.
Jan 20 18:43:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:05 compute-0 podman[95687]: 2026-01-20 18:43:05.999873702 +0000 UTC m=+0.484723436 container init eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_johnson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:06 compute-0 podman[95687]: 2026-01-20 18:43:06.006354875 +0000 UTC m=+0.491204599 container start eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_johnson, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:06 compute-0 great_johnson[95717]: 167 167
Jan 20 18:43:06 compute-0 systemd[1]: libpod-eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87.scope: Deactivated successfully.
Jan 20 18:43:06 compute-0 podman[95687]: 2026-01-20 18:43:06.013169695 +0000 UTC m=+0.498019449 container attach eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_johnson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:06 compute-0 podman[95687]: 2026-01-20 18:43:06.013780081 +0000 UTC m=+0.498629825 container died eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_johnson, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bc2521cb6d51be353b2269ff0c485c47eb45a9b27c88806f8da76ec872dc0ed-merged.mount: Deactivated successfully.
Jan 20 18:43:06 compute-0 podman[95687]: 2026-01-20 18:43:06.053010783 +0000 UTC m=+0.537860507 container remove eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:43:06 compute-0 systemd[1]: libpod-conmon-eb3e76eba58359bcfaf8b73f569566815ecf4f5fc27be62ef9db1831f5dadb87.scope: Deactivated successfully.
Jan 20 18:43:06 compute-0 ceph-mon[74381]: from='client.14544 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:06 compute-0 podman[95741]: 2026-01-20 18:43:06.242589752 +0000 UTC m=+0.089730889 container create 2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:06 compute-0 podman[95741]: 2026-01-20 18:43:06.177211165 +0000 UTC m=+0.024352332 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:06 compute-0 systemd[1]: Started libpod-conmon-2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72.scope.
Jan 20 18:43:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa7bd8efcc5bab4dee445f74b390d8778002f3db00104ea6959c9212f9adf9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa7bd8efcc5bab4dee445f74b390d8778002f3db00104ea6959c9212f9adf9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa7bd8efcc5bab4dee445f74b390d8778002f3db00104ea6959c9212f9adf9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5aa7bd8efcc5bab4dee445f74b390d8778002f3db00104ea6959c9212f9adf9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:06 compute-0 podman[95741]: 2026-01-20 18:43:06.410407837 +0000 UTC m=+0.257549004 container init 2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:06 compute-0 podman[95741]: 2026-01-20 18:43:06.417608207 +0000 UTC m=+0.264749384 container start 2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_elion, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:06 compute-0 podman[95741]: 2026-01-20 18:43:06.422367167 +0000 UTC m=+0.269508354 container attach 2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_elion, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 18:43:06 compute-0 sudo[95785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykwcvvhwvgigcwytqlmyzqugfzohklov ; /usr/bin/python3'
Jan 20 18:43:06 compute-0 sudo[95785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:06 compute-0 python3[95787]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v17: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:06 compute-0 blissful_elion[95757]: {
Jan 20 18:43:06 compute-0 blissful_elion[95757]:     "0": [
Jan 20 18:43:06 compute-0 blissful_elion[95757]:         {
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "devices": [
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "/dev/loop3"
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             ],
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "lv_name": "ceph_lv0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "lv_size": "21470642176",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "name": "ceph_lv0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "tags": {
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.cluster_name": "ceph",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.crush_device_class": "",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.encrypted": "0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.osd_id": "0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.type": "block",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.vdo": "0",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:                 "ceph.with_tpm": "0"
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             },
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "type": "block",
Jan 20 18:43:06 compute-0 blissful_elion[95757]:             "vg_name": "ceph_vg0"
Jan 20 18:43:06 compute-0 blissful_elion[95757]:         }
Jan 20 18:43:06 compute-0 blissful_elion[95757]:     ]
Jan 20 18:43:06 compute-0 blissful_elion[95757]: }
Jan 20 18:43:06 compute-0 systemd[1]: libpod-2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72.scope: Deactivated successfully.
Jan 20 18:43:06 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 12 completed events
Jan 20 18:43:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:43:06 compute-0 podman[95790]: 2026-01-20 18:43:06.680477183 +0000 UTC m=+0.041441910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:06 compute-0 podman[95790]: 2026-01-20 18:43:06.777740891 +0000 UTC m=+0.138705638 container create 23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657 (image=quay.io/ceph/ceph:v19, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 20 18:43:06 compute-0 podman[95741]: 2026-01-20 18:43:06.81246757 +0000 UTC m=+0.659608697 container died 2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_elion, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:06 compute-0 systemd[1]: Started libpod-conmon-23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657.scope.
Jan 20 18:43:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ebeaf542d0c5d13b9306b0d8f686d69cd684a4cdd01d03bb205ceb68e923eb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4ebeaf542d0c5d13b9306b0d8f686d69cd684a4cdd01d03bb205ceb68e923eb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:06 compute-0 podman[95790]: 2026-01-20 18:43:06.877830648 +0000 UTC m=+0.238795355 container init 23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657 (image=quay.io/ceph/ceph:v19, name=funny_ramanujan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:43:06 compute-0 podman[95790]: 2026-01-20 18:43:06.887766947 +0000 UTC m=+0.248731664 container start 23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657 (image=quay.io/ceph/ceph:v19, name=funny_ramanujan, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:43:07 compute-0 podman[95790]: 2026-01-20 18:43:07.024503183 +0000 UTC m=+0.385467910 container attach 23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657 (image=quay.io/ceph/ceph:v19, name=funny_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:43:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aa7bd8efcc5bab4dee445f74b390d8778002f3db00104ea6959c9212f9adf9d-merged.mount: Deactivated successfully.
Jan 20 18:43:07 compute-0 podman[95741]: 2026-01-20 18:43:07.089423849 +0000 UTC m=+0.936564986 container remove 2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_elion, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:07 compute-0 sudo[95598]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:07 compute-0 systemd[1]: libpod-conmon-2abc710973308ab10ec4c87dac6b8592fe100fbf8524d3aae6e3b58a99932d72.scope: Deactivated successfully.
Jan 20 18:43:07 compute-0 sudo[95843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:07 compute-0 sudo[95843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:07 compute-0 sudo[95843]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:07 compute-0 sudo[95868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:43:07 compute-0 sudo[95868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:07 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:07 compute-0 funny_ramanujan[95819]: 
Jan 20 18:43:07 compute-0 funny_ramanujan[95819]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Jan 20 18:43:07 compute-0 systemd[1]: libpod-23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657.scope: Deactivated successfully.
Jan 20 18:43:07 compute-0 podman[95790]: 2026-01-20 18:43:07.292663121 +0000 UTC m=+0.653627828 container died 23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657 (image=quay.io/ceph/ceph:v19, name=funny_ramanujan, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4ebeaf542d0c5d13b9306b0d8f686d69cd684a4cdd01d03bb205ceb68e923eb-merged.mount: Deactivated successfully.
Jan 20 18:43:07 compute-0 podman[95790]: 2026-01-20 18:43:07.329325599 +0000 UTC m=+0.690290306 container remove 23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657 (image=quay.io/ceph/ceph:v19, name=funny_ramanujan, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 18:43:07 compute-0 systemd[1]: libpod-conmon-23b622c4cb7a1ca9da14f12c4d5d3df66be2c125b265dd0ad2133a2022434657.scope: Deactivated successfully.
Jan 20 18:43:07 compute-0 ceph-mon[74381]: pgmap v17: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:07 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:07 compute-0 sudo[95785]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.603337274 +0000 UTC m=+0.034349851 container create 26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 18:43:07 compute-0 systemd[1]: Started libpod-conmon-26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3.scope.
Jan 20 18:43:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.668286242 +0000 UTC m=+0.099298839 container init 26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.672903938 +0000 UTC m=+0.103916515 container start 26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.676080247 +0000 UTC m=+0.107092824 container attach 26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:07 compute-0 heuristic_hopper[95965]: 167 167
Jan 20 18:43:07 compute-0 systemd[1]: libpod-26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3.scope: Deactivated successfully.
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.677971454 +0000 UTC m=+0.108984031 container died 26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.589779706 +0000 UTC m=+0.020792303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-903e174b0ee98bee7c4e129292809e1e1c163f6bf8c30cd0a4dad7b1d12c01fc-merged.mount: Deactivated successfully.
Jan 20 18:43:07 compute-0 podman[95948]: 2026-01-20 18:43:07.711504094 +0000 UTC m=+0.142516671 container remove 26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_hopper, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 20 18:43:07 compute-0 systemd[1]: libpod-conmon-26b5831c17baca3bee01c06946cf69f9d340b889971f92d9a4efed500fd231c3.scope: Deactivated successfully.
Jan 20 18:43:07 compute-0 ansible-async_wrapper.py[95210]: Done in kid B.
Jan 20 18:43:07 compute-0 podman[95989]: 2026-01-20 18:43:07.870781845 +0000 UTC m=+0.037988663 container create 300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_beaver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:43:07 compute-0 systemd[1]: Started libpod-conmon-300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2.scope.
Jan 20 18:43:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f84a55ca5801b49a48a1a230bd398999b65268fca4e5f95c56370a55904d65a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f84a55ca5801b49a48a1a230bd398999b65268fca4e5f95c56370a55904d65a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f84a55ca5801b49a48a1a230bd398999b65268fca4e5f95c56370a55904d65a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f84a55ca5801b49a48a1a230bd398999b65268fca4e5f95c56370a55904d65a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:07 compute-0 podman[95989]: 2026-01-20 18:43:07.852893087 +0000 UTC m=+0.020099935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:07 compute-0 podman[95989]: 2026-01-20 18:43:07.957206271 +0000 UTC m=+0.124413089 container init 300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_beaver, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:07 compute-0 podman[95989]: 2026-01-20 18:43:07.96395331 +0000 UTC m=+0.131160128 container start 300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_beaver, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:07 compute-0 podman[95989]: 2026-01-20 18:43:07.96756112 +0000 UTC m=+0.134767938 container attach 300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_beaver, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 18:43:08 compute-0 sudo[96034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcjmfypdqicykyhyltkddzgjymbofutz ; /usr/bin/python3'
Jan 20 18:43:08 compute-0 sudo[96034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:08 compute-0 python3[96036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:08 compute-0 podman[96061]: 2026-01-20 18:43:08.340605897 +0000 UTC m=+0.043333847 container create 6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5 (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:43:08 compute-0 systemd[1]: Started libpod-conmon-6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5.scope.
Jan 20 18:43:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283d956301a6cd8baaeb7228ae65aaa2360646748ac8d63b6830a23775ed1bd4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/283d956301a6cd8baaeb7228ae65aaa2360646748ac8d63b6830a23775ed1bd4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:08 compute-0 podman[96061]: 2026-01-20 18:43:08.323308443 +0000 UTC m=+0.026036413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:08 compute-0 podman[96061]: 2026-01-20 18:43:08.430717854 +0000 UTC m=+0.133445834 container init 6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5 (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:08 compute-0 podman[96061]: 2026-01-20 18:43:08.436820287 +0000 UTC m=+0.139548237 container start 6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5 (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:43:08 compute-0 podman[96061]: 2026-01-20 18:43:08.4405436 +0000 UTC m=+0.143271560 container attach 6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5 (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 18:43:08 compute-0 lvm[96135]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:43:08 compute-0 lvm[96135]: VG ceph_vg0 finished
Jan 20 18:43:08 compute-0 quirky_beaver[96006]: {}
Jan 20 18:43:08 compute-0 systemd[1]: libpod-300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2.scope: Deactivated successfully.
Jan 20 18:43:08 compute-0 systemd[1]: libpod-300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2.scope: Consumed 1.062s CPU time.
Jan 20 18:43:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v18: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:08 compute-0 ceph-mon[74381]: from='client.14550 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:08 compute-0 podman[96147]: 2026-01-20 18:43:08.698672757 +0000 UTC m=+0.032054393 container died 300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_beaver, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f84a55ca5801b49a48a1a230bd398999b65268fca4e5f95c56370a55904d65a-merged.mount: Deactivated successfully.
Jan 20 18:43:08 compute-0 podman[96147]: 2026-01-20 18:43:08.740352562 +0000 UTC m=+0.073734198 container remove 300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:43:08 compute-0 systemd[1]: libpod-conmon-300e3b6c8a9ee94ea7cbcc4dcf4f0762a68d9b43d37999c872ffedbd74dc1ef2.scope: Deactivated successfully.
Jan 20 18:43:08 compute-0 sudo[95868]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:43:08 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:08 compute-0 stoic_boyd[96101]: 
Jan 20 18:43:08 compute-0 stoic_boyd[96101]: [{"container_id": "7416fde3489a", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.07%", "created": "2026-01-20T18:39:30.822781Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T18:42:49.254512Z", "memory_usage": 7808745, "ports": [], "service_name": "crash", "started": "2026-01-20T18:39:30.424929Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@crash.compute-0", "version": "19.2.3"}, {"container_id": "4a30a103bb68", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.40%", "created": "2026-01-20T18:40:19.195484Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T18:42:48.630176Z", "memory_usage": 7817134, "ports": [], "service_name": "crash", "started": "2026-01-20T18:40:18.882757Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@crash.compute-1", "version": "19.2.3"}, {"container_id": "7786f776d6a1", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.20%", "created": "2026-01-20T18:41:21.157121Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T18:42:48.833973Z", "memory_usage": 7803502, "ports": [], "service_name": "crash", "started": "2026-01-20T18:41:21.034222Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@crash.compute-2", "version": "19.2.3"}, {"container_id": "5d7fd05f6661", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "22.27%", "created": "2026-01-20T18:38:50.044708Z", "daemon_id": "compute-0.cepfkm", "daemon_name": "mgr.compute-0.cepfkm", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T18:42:49.254404Z", "memory_usage": 541484646, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-20T18:38:49.937851Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mgr.compute-0.cepfkm", "version": "19.2.3"}, {"container_id": "a224023d27cf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "32.76%", "created": "2026-01-20T18:41:19.079460Z", "daemon_id": "compute-1.whkwsm", "daemon_name": "mgr.compute-1.whkwsm", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T18:42:48.630406Z", "memory_usage": 504469913, "ports": [8765], "service_name": "mgr", "started": "2026-01-20T18:41:18.950806Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mgr.compute-1.whkwsm", "version": "19.2.3"}, {"container_id": "d876f8edbb25", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "27.08%", "created": "2026-01-20T18:41:12.856923Z", "daemon_id": "compute-2.pyghhf", "daemon_name": "mgr.compute-2.pyghhf", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T18:42:48.833877Z", "memory_usage": 504050483, "ports": [8765], "service_name": "mgr", "started": "2026-01-20T18:41:12.754917Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mgr.compute-2.pyghhf", "version": "19.2.3"}, {"container_id": "2fba800b181b", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "2.29%", "created": "2026-01-20T18:38:45.990601Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T18:42:49.254249Z", "memory_request": 2147483648, "memory_usage": 63271075, "ports": [], "service_name": "mon", "started": "2026-01-20T18:38:48.098098Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mon.compute-0", "version": "19.2.3"}, {"container_id": "ccb6c4a2ca2d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.87%", "created": "2026-01-20T18:41:07.883127Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T18:42:48.630340Z", "memory_request": 2147483648, "memory_usage": 47248834, "ports": [], "service_name": "mon", "started": "2026-01-20T18:41:07.762345Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mon.compute-1", "version": "19.2.3"}, {"container_id": "6ea509e8696e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.36%", "created": "2026-01-20T18:41:05.654755Z", "daemon_id": "compute-2", "daemon_name": "mon.compute-2", "daemon_type": "mon", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T18:42:48.833696Z", "memory_request": 2147483648, "memory_usage": 53016002, "ports": [], "service_name": "mon", "started": "2026-01-20T18:41:05.537334Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@mon.compute-2", "version": "19.2.3"}, {"container_id": "ce781f31ce1e", "container_image_digests": ["quay.io/prometheus/node-exporter@sha256:4cb2b9019f1757be8482419002cb7afe028fdba35d47958829e4cfeaf6246d80", "quay.io/prometheus/node-exporter@sha256:52a6f10ff10238979c365c06dbed8ad5cd1645c41780dc08ff813adacfb2341e"], "container_image_id": "72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e", "container_image_name": "quay.io/prometheus/node-exporter:v1.7.0", "cpu_percentage": "0.12%", "created": "2026-01-20T18:42:33.329323Z", "daemon_id": "compute-0", "daemon_name": "node-exporter.compute-0", "daemon_type": "node-exporter", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T18:42:49.254849Z", "memory_usage": 5961154, "ports": [9100], "service_name": "node-exporter", "started": "2026-01-20T18:42:33.247483Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@node-exporter.compute-0", "version": "1.7.0"}, {"daemon_id": "compute-1", "daemon_name": "node-exporter.compute-1", "daemon_type": "node-exporter", "events": ["2026-01-20T18:42:59.049481Z daemon:node-exporter.compute-1 [INFO] \"Deployed node-exporter.compute-1 on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"daemon_id": "compute-2", "daemon_name": "node-exporter.compute-2", "daemon_type": "node-exporter", "events": ["2026-01-20T18:43:03.399903Z daemon:node-exporter.compute-2 [INFO] \"Deployed node-exporter.compute-2 on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [9100], "service_name": "node-exporter", "status": 2, "status_desc": "starting"}, {"container_id": "d1930c87ccd8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.03%", "created": "2026-01-20T18:40:30.739278Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T18:42:49.254616Z", "memory_request": 4294967296, "memory_usage": 77594624, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T18:40:30.633432Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@osd.0", "version": "19.2.3"}, {"container_id": "87bc2e65c64d", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.96%", "created": "2026-01-20T18:40:31.039884Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-01-20T18:42:48.630272Z", "memory_request": 4294967296, "memory_usage": 69940019, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T18:40:30.942007Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@osd.1", "version": "19.2.3"}, {"container_id": "e97613b11baf", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.07%", "created": "2026-01-20T18:41:36.437487Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-01-20T18:42:48.834053Z", "memory_request": 4294967296, "memory_usage": 65043169, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T18:41:36.304675Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@osd.2", "version": "19.2.3"}, {"container_id": "620f7a3733a8", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.69%", "created": "2026-01-20T18:42:08.700719Z", "daemon_id": "rgw.compute-0.phlxkp", "daemon_name": "rgw.rgw.compute-0.phlxkp", "daemon_type": "rgw", "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-01-20T18:42:49.254723Z", "memory_usage": 101229527, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-20T18:42:07.835352Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@rgw.rgw.compute-0.phlxkp", "version": "19.2.3"}, {"container_id": "f66ce6fd0294", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.20%", "created": "2026-01-20T18:42:03.320700Z", "daemon_id": "rgw.compute-1.unzimq", "daemon_name": "rgw.rgw.compute-1.unzimq", "daemon_type": "rgw", "hostname": "compute-1", "ip": "192.168.122.101", "is_active": false, "last_refresh": "2026-01-20T18:42:48.630472Z", "memory_usage": 101942558, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-20T18:42:03.209394Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@rgw.rgw.compute-1.unzimq", "version": "19.2.3"}, {"container_id": "3b1ad516c4c3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "1.66%", "created": "2026-01-20T18:41:55.983954Z", "daemon_id": "rgw.compute-2.mqbqmb", "daemon_name": "rgw.rgw.compute-2.mqbqmb", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2026-01-20T18:42:48.834162Z", "memory_usage": 100789125, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-20T18:41:55.833898Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@rgw.rgw.compute-2.mqbqmb", "version": "19.2.3"}]
Jan 20 18:43:08 compute-0 systemd[1]: libpod-6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5.scope: Deactivated successfully.
Jan 20 18:43:08 compute-0 podman[96164]: 2026-01-20 18:43:08.860990294 +0000 UTC m=+0.021591622 container died 6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5 (image=quay.io/ceph/ceph:v19, name=stoic_boyd, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-283d956301a6cd8baaeb7228ae65aaa2360646748ac8d63b6830a23775ed1bd4-merged.mount: Deactivated successfully.
Jan 20 18:43:08 compute-0 podman[96164]: 2026-01-20 18:43:08.897448177 +0000 UTC m=+0.058049495 container remove 6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5 (image=quay.io/ceph/ceph:v19, name=stoic_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:08 compute-0 systemd[1]: libpod-conmon-6159258ff2173c3509726417c9ade714ef979dad3625c03ad7a238ea4fdbaaf5.scope: Deactivated successfully.
Jan 20 18:43:08 compute-0 sudo[96034]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:08 compute-0 rsyslogd[1003]: message too long (15159) with configured size 8096, begin of message is: [{"container_id": "7416fde3489a", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 20 18:43:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:43:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:09 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 1ea6261f-0638-4c63-be1b-b435161c148c (Updating mds.cephfs deployment (+3 -> 3))
Jan 20 18:43:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rrgioo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 20 18:43:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rrgioo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 18:43:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rrgioo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 18:43:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:09 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.rrgioo on compute-2
Jan 20 18:43:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.rrgioo on compute-2
Jan 20 18:43:09 compute-0 ceph-mon[74381]: pgmap v18: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Jan 20 18:43:09 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:09 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:09 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rrgioo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 18:43:09 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.rrgioo", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 18:43:09 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:10 compute-0 sudo[96202]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azcwwxhfexwecpnnzwoxskwxcaudkfrl ; /usr/bin/python3'
Jan 20 18:43:10 compute-0 sudo[96202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:10 compute-0 python3[96204]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:10 compute-0 podman[96205]: 2026-01-20 18:43:10.528748579 +0000 UTC m=+0.044574768 container create 64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975 (image=quay.io/ceph/ceph:v19, name=naughty_davinci, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 18:43:10 compute-0 systemd[1]: Started libpod-conmon-64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975.scope.
Jan 20 18:43:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402798e354d16127649000635031e6062e18808e43e786f42546b522241433a1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402798e354d16127649000635031e6062e18808e43e786f42546b522241433a1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:10 compute-0 podman[96205]: 2026-01-20 18:43:10.59268482 +0000 UTC m=+0.108511029 container init 64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975 (image=quay.io/ceph/ceph:v19, name=naughty_davinci, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:43:10 compute-0 podman[96205]: 2026-01-20 18:43:10.598730022 +0000 UTC m=+0.114556211 container start 64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975 (image=quay.io/ceph/ceph:v19, name=naughty_davinci, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:43:10 compute-0 podman[96205]: 2026-01-20 18:43:10.601710377 +0000 UTC m=+0.117536566 container attach 64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975 (image=quay.io/ceph/ceph:v19, name=naughty_davinci, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:10 compute-0 podman[96205]: 2026-01-20 18:43:10.51201713 +0000 UTC m=+0.027843339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v19: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 18:43:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1492333938' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:43:10 compute-0 naughty_davinci[96220]: 
Jan 20 18:43:10 compute-0 naughty_davinci[96220]: {"fsid":"aecbbf3b-b405-507b-97d7-637a83f5b4b1","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":111,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":55,"num_osds":3,"num_up_osds":3,"osd_up_since":1768934507,"num_in_osds":3,"osd_in_since":1768934485,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":136}],"num_pgs":136,"num_pools":12,"num_objects":195,"data_bytes":464595,"bytes_used":89124864,"bytes_avail":64322801664,"bytes_total":64411926528,"write_bytes_sec":0,"read_op_per_sec":0,"write_op_per_sec":0},"fsmap":{"epoch":2,"btime":"2026-01-20T18:42:47:693934+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-01-20T18:42:24.549709+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.cepfkm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.whkwsm":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.pyghhf":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14364":{"start_epoch":5,"start_stamp":"2026-01-20T18:42:23.280618+0000","gid":14364,"addr":"192.168.122.100:0/2396071884","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.phlxkp","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864316","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"e5424423-e0b7-453a-acdd-580a59c79a77","zone_name":"default","zonegroup_id":"3115895e-8a03-4fc4-b262-7d669efe3b52","zonegroup_name":"default"},"task_status":{}},"24128":{"start_epoch":5,"start_stamp":"2026-01-20T18:42:23.465869+0000","gid":24128,"addr":"192.168.122.101:0/3140638165","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.unzimq","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864304","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"e5424423-e0b7-453a-acdd-580a59c79a77","zone_name":"default","zonegroup_id":"3115895e-8a03-4fc4-b262-7d669efe3b52","zonegroup_name":"default"},"task_status":{}},"24145":{"start_epoch":5,"start_stamp":"2026-01-20T18:42:23.169899+0000","gid":24145,"addr":"192.168.122.102:0/2196779070","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.mqbqmb","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864300","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"e5424423-e0b7-453a-acdd-580a59c79a77","zone_name":"default","zonegroup_id":"3115895e-8a03-4fc4-b262-7d669efe3b52","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{}}
Jan 20 18:43:11 compute-0 systemd[1]: libpod-64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975.scope: Deactivated successfully.
Jan 20 18:43:11 compute-0 podman[96205]: 2026-01-20 18:43:11.015039732 +0000 UTC m=+0.530865921 container died 64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975 (image=quay.io/ceph/ceph:v19, name=naughty_davinci, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:43:11 compute-0 ceph-mon[74381]: from='client.14556 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 18:43:11 compute-0 ceph-mon[74381]: Deploying daemon mds.cephfs.compute-2.rrgioo on compute-2
Jan 20 18:43:11 compute-0 ceph-mon[74381]: pgmap v19: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-402798e354d16127649000635031e6062e18808e43e786f42546b522241433a1-merged.mount: Deactivated successfully.
Jan 20 18:43:12 compute-0 podman[96205]: 2026-01-20 18:43:12.2758301 +0000 UTC m=+1.791656289 container remove 64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975 (image=quay.io/ceph/ceph:v19, name=naughty_davinci, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:43:12 compute-0 systemd[1]: libpod-conmon-64716bacf23fc1ca9399c1e623fe633bc55d5a577344635cc7197bc96ad1c975.scope: Deactivated successfully.
Jan 20 18:43:12 compute-0 sudo[96202]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v20: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:13 compute-0 sudo[96281]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kavdbejnjicsohvhhvsvgwphkytoyvbg ; /usr/bin/python3'
Jan 20 18:43:13 compute-0 sudo[96281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e3 new map
Jan 20 18:43:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-20T18:43:12:559794+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:42:47.693845+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-2.rrgioo{-1:24196} state up:standby seq 1 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
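                                           The map above shows the filesystem with no ranks assigned ("up {}") and one standby MDS waiting. The same fsmap is available as JSON; a minimal sketch for reading it (assumes a reachable cluster, an admin keyring, the ceph CLI in PATH, and the squid-era "fs dump" JSON schema):

                                               import json
                                               import subprocess

                                               # Fetch the current fsmap as JSON and summarize ranks vs. standbys.
                                               raw = subprocess.check_output(["ceph", "fs", "dump", "--format", "json"])
                                               dump = json.loads(raw)
                                               for fs in dump["filesystems"]:
                                                   m = fs["mdsmap"]
                                                   print(m["fs_name"], "epoch", m["epoch"],
                                                         "max_mds", m["max_mds"], "up", m["up"])
                                               print("standbys:", [d["name"] for d in dump.get("standbys", [])])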
Jan 20 18:43:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1492333938' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Jan 20 18:43:13 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] up:boot
Jan 20 18:43:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] as mds.0
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.rrgioo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 20 18:43:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"} v 0)
Jan 20 18:43:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"}]: dispatch
Jan 20 18:43:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e3 all = 0
Jan 20 18:43:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 18:43:13 compute-0 python3[96283]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:13 compute-0 podman[96284]: 2026-01-20 18:43:13.78412165 +0000 UTC m=+0.046582759 container create e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99 (image=quay.io/ceph/ceph:v19, name=kind_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:43:13 compute-0 systemd[1]: Started libpod-conmon-e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99.scope.
Jan 20 18:43:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2421ddc1ca1d47ce5b5f7786839dda46d467c4c945932fe16b8dfd503597ea91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2421ddc1ca1d47ce5b5f7786839dda46d467c4c945932fe16b8dfd503597ea91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:13 compute-0 podman[96284]: 2026-01-20 18:43:13.764673863 +0000 UTC m=+0.027134992 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:13 compute-0 podman[96284]: 2026-01-20 18:43:13.926367333 +0000 UTC m=+0.188828442 container init e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99 (image=quay.io/ceph/ceph:v19, name=kind_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:13 compute-0 podman[96284]: 2026-01-20 18:43:13.931462091 +0000 UTC m=+0.193923200 container start e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99 (image=quay.io/ceph/ceph:v19, name=kind_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:13 compute-0 podman[96284]: 2026-01-20 18:43:13.935667277 +0000 UTC m=+0.198128396 container attach e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99 (image=quay.io/ceph/ceph:v19, name=kind_proskuriakova, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468372551' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:43:14 compute-0 kind_proskuriakova[96300]: 
Jan 20 18:43:14 compute-0 kind_proskuriakova[96300]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.cepfkm/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.whkwsm/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.pyghhf/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.phlxkp","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.unzimq","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.mqbqmb","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
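                                           The container output above is a single "config dump -f json" response: a flat JSON array of {section, name, value, level, can_update_at_runtime, mask} records. A minimal sketch for filtering it offline (config_dump.json is a hypothetical file holding the array above):

                                               import json

                                               # Print only the Keystone-related RGW options from the dump.
                                               with open("config_dump.json") as f:
                                                   options = json.load(f)
                                               for opt in options:
                                                   if opt["name"].startswith("rgw_keystone_"):
                                                       print(f"{opt['section']}: {opt['name']} = {opt['value']}")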
Jan 20 18:43:14 compute-0 systemd[1]: libpod-e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99.scope: Deactivated successfully.
Jan 20 18:43:14 compute-0 podman[96284]: 2026-01-20 18:43:14.282720942 +0000 UTC m=+0.545182051 container died e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99 (image=quay.io/ceph/ceph:v19, name=kind_proskuriakova, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e4 new map
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-20T18:43:13.613330+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:13.613325+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24196}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-2.rrgioo{0:24196} state up:creating seq 1 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:creating}
Jan 20 18:43:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v21: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:14 compute-0 ceph-mon[74381]: pgmap v20: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:14 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] up:boot
Jan 20 18:43:14 compute-0 ceph-mon[74381]: daemon mds.cephfs.compute-2.rrgioo assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: Cluster is now healthy
Jan 20 18:43:14 compute-0 ceph-mon[74381]: fsmap cephfs:0 1 up:standby
Jan 20 18:43:14 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"}]: dispatch
Jan 20 18:43:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1468372551' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Jan 20 18:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2421ddc1ca1d47ce5b5f7786839dda46d467c4c945932fe16b8dfd503597ea91-merged.mount: Deactivated successfully.
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bekmxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bekmxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bekmxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
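                                           Before deploying a new MDS, the mgr mints its keyring with "auth get-or-create" using the caps recorded in the audit lines above. A minimal sketch of the equivalent direct call (entity name and caps copied verbatim from the log; requires client.admin privileges):

                                               import subprocess

                                               # Create (or fetch) the MDS daemon key with the caps cephadm requests.
                                               subprocess.run([
                                                   "ceph", "auth", "get-or-create", "mds.cephfs.compute-0.bekmxe",
                                                   "mon", "profile mds",
                                                   "osd", "allow rw tag cephfs *=*",
                                                   "mds", "allow",
                                               ], check=True)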
Jan 20 18:43:14 compute-0 podman[96284]: 2026-01-20 18:43:14.796137175 +0000 UTC m=+1.058598284 container remove e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99 (image=quay.io/ceph/ceph:v19, name=kind_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:14 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bekmxe on compute-0
Jan 20 18:43:14 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bekmxe on compute-0
Jan 20 18:43:14 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.rrgioo is now active in filesystem cephfs as rank 0
Jan 20 18:43:14 compute-0 sudo[96281]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:14 compute-0 sudo[96337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:14 compute-0 sudo[96337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:14 compute-0 systemd[1]: libpod-conmon-e1c4c1c01bfd34181d3eafe955117d1070a2ca2fc9b2ab5f8e437643becddc99.scope: Deactivated successfully.
Jan 20 18:43:14 compute-0 sudo[96337]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:14 compute-0 sudo[96362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:43:14 compute-0 sudo[96362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
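                                           Here the mgr runs a checksummed copy of the cephadm binary under sudo with "_orch deploy"; the resulting systemd unit is named from the fsid plus the daemon name. A minimal sketch of that naming rule (values taken from the sudo command line above; the rule is the usual cephadm "ceph-<fsid>@<daemon>.service" convention):

                                               # Derive the systemd unit cephadm will manage for this daemon.
                                               fsid = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
                                               daemon = "mds.cephfs.compute-0.bekmxe"
                                               print(f"ceph-{fsid}@{daemon}.service")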
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.238697042 +0000 UTC m=+0.041166581 container create 312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_franklin, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:43:15 compute-0 systemd[1]: Started libpod-conmon-312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd.scope.
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.217367898 +0000 UTC m=+0.019837467 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.37947068 +0000 UTC m=+0.181940249 container init 312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.38625508 +0000 UTC m=+0.188724609 container start 312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:43:15 compute-0 laughing_franklin[96444]: 167 167
Jan 20 18:43:15 compute-0 systemd[1]: libpod-312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd.scope: Deactivated successfully.
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.391728967 +0000 UTC m=+0.194198526 container attach 312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_franklin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.392117387 +0000 UTC m=+0.194586946 container died 312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_franklin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b14078ba013cb94e03ce68b375ae63274e38f303b5f632dbabd3923d69e518cf-merged.mount: Deactivated successfully.
Jan 20 18:43:15 compute-0 podman[96428]: 2026-01-20 18:43:15.431257128 +0000 UTC m=+0.233726667 container remove 312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:43:15 compute-0 systemd[1]: libpod-conmon-312b5e2b4669339593e8170b06f75ea4a3109b7cb6ce9da97774f84b1981efdd.scope: Deactivated successfully.
Jan 20 18:43:15 compute-0 systemd[1]: Reloading.
Jan 20 18:43:15 compute-0 systemd-sysv-generator[96487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:15 compute-0 systemd-rc-local-generator[96484]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:15 compute-0 sudo[96519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miqgyzpbqemyvagizsthqfcmxkjjcwkr ; /usr/bin/python3'
Jan 20 18:43:15 compute-0 sudo[96519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:15 compute-0 systemd[1]: Reloading.
Jan 20 18:43:15 compute-0 systemd-rc-local-generator[96551]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:15 compute-0 systemd-sysv-generator[96554]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:15 compute-0 python3[96523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
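                                           This task queries "osd get-require-min-compat-client", i.e. the oldest client release the cluster will currently accept (the container answers "mimic" a few lines below). A minimal sketch of the same check from Python (assumes the ceph CLI and an admin keyring):

                                               import subprocess

                                               # Oldest client release the OSDs currently require (e.g. "mimic").
                                               release = subprocess.check_output(
                                                   ["ceph", "osd", "get-require-min-compat-client"]).decode().strip()
                                               print("min compat client:", release)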
Jan 20 18:43:15 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:creating}
Jan 20 18:43:15 compute-0 ceph-mon[74381]: pgmap v21: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:15 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:15 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bekmxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 18:43:15 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bekmxe", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 18:43:15 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:15 compute-0 ceph-mon[74381]: daemon mds.cephfs.compute-2.rrgioo is now active in filesystem cephfs as rank 0
Jan 20 18:43:15 compute-0 podman[96562]: 2026-01-20 18:43:15.914272919 +0000 UTC m=+0.065039510 container create a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7 (image=quay.io/ceph/ceph:v19, name=dreamy_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:43:15 compute-0 podman[96562]: 2026-01-20 18:43:15.876131164 +0000 UTC m=+0.026897775 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:16 compute-0 systemd[1]: Started libpod-conmon-a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7.scope.
Jan 20 18:43:16 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.bekmxe for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:43:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e7121e6cd6db138ff5fe223ba49676904eeee693468c354215fbdf76e38c25/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e7121e6cd6db138ff5fe223ba49676904eeee693468c354215fbdf76e38c25/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:16 compute-0 podman[96562]: 2026-01-20 18:43:16.108977417 +0000 UTC m=+0.259744028 container init a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7 (image=quay.io/ceph/ceph:v19, name=dreamy_elbakyan, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:16 compute-0 podman[96562]: 2026-01-20 18:43:16.120641779 +0000 UTC m=+0.271408360 container start a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7 (image=quay.io/ceph/ceph:v19, name=dreamy_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 18:43:16 compute-0 podman[96562]: 2026-01-20 18:43:16.125148023 +0000 UTC m=+0.275914654 container attach a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7 (image=quay.io/ceph/ceph:v19, name=dreamy_elbakyan, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e5 new map
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-20T18:43:15.795669+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:15.795666+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24196}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24196 members: 24196
                                           [mds.cephfs.compute-2.rrgioo{0:24196} state up:active seq 2 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] up:active
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active}
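                                           The fsmap line above confirms rank 0 moved from up:creating (epoch 4) to up:active (epoch 5), with the quiesce DB cluster electing gid 24196 as leader. A minimal sketch that waits for rank 0 to reach up:active, under the same "fs dump" JSON schema assumption as earlier:

                                               import json
                                               import subprocess
                                               import time

                                               def rank0_state() -> str:
                                                   # Read rank 0's state ("up:creating", "up:active", ...) from the fsmap.
                                                   dump = json.loads(subprocess.check_output(
                                                       ["ceph", "fs", "dump", "--format", "json"]))
                                                   for mds in dump["filesystems"][0]["mdsmap"]["info"].values():
                                                       if mds.get("rank") == 0:
                                                           return mds["state"]
                                                   return "down"

                                               while rank0_state() != "up:active":
                                                   time.sleep(1)
                                               print("rank 0 is active")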
Jan 20 18:43:16 compute-0 podman[96648]: 2026-01-20 18:43:16.282542816 +0000 UTC m=+0.037559722 container create f52e9b086ca4e3ff67fc9a4087a161b39b54d6d407da30947e4c3c7d432b0b48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mds-cephfs-compute-0-bekmxe, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1732f8d39175fb61bec284824c50b459028a1f097ee7fb264352be6bdd818/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1732f8d39175fb61bec284824c50b459028a1f097ee7fb264352be6bdd818/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1732f8d39175fb61bec284824c50b459028a1f097ee7fb264352be6bdd818/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7be1732f8d39175fb61bec284824c50b459028a1f097ee7fb264352be6bdd818/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bekmxe supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:16 compute-0 podman[96648]: 2026-01-20 18:43:16.344796225 +0000 UTC m=+0.099813151 container init f52e9b086ca4e3ff67fc9a4087a161b39b54d6d407da30947e4c3c7d432b0b48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mds-cephfs-compute-0-bekmxe, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 18:43:16 compute-0 podman[96648]: 2026-01-20 18:43:16.351316229 +0000 UTC m=+0.106333135 container start f52e9b086ca4e3ff67fc9a4087a161b39b54d6d407da30947e4c3c7d432b0b48 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mds-cephfs-compute-0-bekmxe, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:16 compute-0 bash[96648]: f52e9b086ca4e3ff67fc9a4087a161b39b54d6d407da30947e4c3c7d432b0b48
Jan 20 18:43:16 compute-0 podman[96648]: 2026-01-20 18:43:16.265563731 +0000 UTC m=+0.020580657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:16 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.bekmxe for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:43:16 compute-0 ceph-mds[96670]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 18:43:16 compute-0 ceph-mds[96670]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Jan 20 18:43:16 compute-0 ceph-mds[96670]: main not setting numa affinity
Jan 20 18:43:16 compute-0 ceph-mds[96670]: pidfile_write: ignore empty --pid-file
Jan 20 18:43:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mds-cephfs-compute-0-bekmxe[96666]: starting mds.cephfs.compute-0.bekmxe at 
Jan 20 18:43:16 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 5 from mon.0
Jan 20 18:43:16 compute-0 sudo[96362]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/323185268' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 20 18:43:16 compute-0 dreamy_elbakyan[96579]: mimic
Jan 20 18:43:16 compute-0 systemd[1]: libpod-a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7.scope: Deactivated successfully.
Jan 20 18:43:16 compute-0 podman[96562]: 2026-01-20 18:43:16.514956219 +0000 UTC m=+0.665722830 container died a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7 (image=quay.io/ceph/ceph:v19, name=dreamy_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-49e7121e6cd6db138ff5fe223ba49676904eeee693468c354215fbdf76e38c25-merged.mount: Deactivated successfully.
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.eisxof", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.eisxof", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 18:43:16 compute-0 podman[96562]: 2026-01-20 18:43:16.563521266 +0000 UTC m=+0.714287857 container remove a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7 (image=quay.io/ceph/ceph:v19, name=dreamy_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 18:43:16 compute-0 systemd[1]: libpod-conmon-a15c806a538b5b032f05e447287da8c27175ec751eebfcb7b2169c2083cd84f7.scope: Deactivated successfully.
Jan 20 18:43:16 compute-0 sudo[96519]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.eisxof", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:16 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.eisxof on compute-1
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.eisxof on compute-1
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v22: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:43:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:43:16 compute-0 ceph-mon[74381]: Deploying daemon mds.cephfs.compute-0.bekmxe on compute-0
Jan 20 18:43:16 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] up:active
Jan 20 18:43:16 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active}
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/323185268' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.eisxof", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.eisxof", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 18:43:16 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:16 compute-0 ceph-mon[74381]: Deploying daemon mds.cephfs.compute-1.eisxof on compute-1
Jan 20 18:43:16 compute-0 ceph-mon[74381]: pgmap v22: 136 pgs: 136 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:17 compute-0 sshd-session[96703]: Connection closed by 154.117.199.5 port 39629 [preauth]
Jan 20 18:43:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e6 new map
Jan 20 18:43:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e6 print_map
                                           e6
                                           btime 2026-01-20T18:43:17.270362+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:15.795666+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24196}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 24196 members: 24196
                                           [mds.cephfs.compute-2.rrgioo{0:24196} state up:active seq 2 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bekmxe{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:17 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 6 from mon.0
Jan 20 18:43:17 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Monitors have assigned me to become a standby
Jan 20 18:43:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:boot
Jan 20 18:43:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 1 up:standby
Jan 20 18:43:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bekmxe"} v 0)
Jan 20 18:43:17 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bekmxe"}]: dispatch
Jan 20 18:43:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e6 all = 0
Jan 20 18:43:17 compute-0 sudo[96729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfhbkmgsiwbnpzolynmqngjwxzagtiai ; /usr/bin/python3'
Jan 20 18:43:17 compute-0 sudo[96729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:43:17 compute-0 python3[96731]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:43:17 compute-0 podman[96732]: 2026-01-20 18:43:17.813777438 +0000 UTC m=+0.051887891 container create 29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512 (image=quay.io/ceph/ceph:v19, name=charming_khorana, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:43:17 compute-0 systemd[1]: Started libpod-conmon-29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512.scope.
Jan 20 18:43:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15f10906c79cbe0d9f464125380fc53db93cc1a8f69a224ce0a99a29f95bb9f9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15f10906c79cbe0d9f464125380fc53db93cc1a8f69a224ce0a99a29f95bb9f9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:17 compute-0 podman[96732]: 2026-01-20 18:43:17.793048387 +0000 UTC m=+0.031158880 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:43:17 compute-0 podman[96732]: 2026-01-20 18:43:17.901283914 +0000 UTC m=+0.139394397 container init 29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512 (image=quay.io/ceph/ceph:v19, name=charming_khorana, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:43:17 compute-0 podman[96732]: 2026-01-20 18:43:17.908974789 +0000 UTC m=+0.147085242 container start 29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512 (image=quay.io/ceph/ceph:v19, name=charming_khorana, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 18:43:17 compute-0 podman[96732]: 2026-01-20 18:43:17.912094552 +0000 UTC m=+0.150205025 container attach 29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512 (image=quay.io/ceph/ceph:v19, name=charming_khorana, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:43:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e7 new map
Jan 20 18:43:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e7 print_map
                                           e7
                                           btime 2026-01-20T18:43:17.563137+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:15.795666+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24196}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24196 members: 24196
                                           [mds.cephfs.compute-2.rrgioo{0:24196} state up:active seq 2 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bekmxe{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:18 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 1 up:standby
Jan 20 18:43:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 20 18:43:18 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1683733983' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 20 18:43:18 compute-0 charming_khorana[96748]: 
Jan 20 18:43:18 compute-0 charming_khorana[96748]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":2},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":14}}
Jan 20 18:43:18 compute-0 systemd[1]: libpod-29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512.scope: Deactivated successfully.
Jan 20 18:43:18 compute-0 podman[96732]: 2026-01-20 18:43:18.336245808 +0000 UTC m=+0.574356301 container died 29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512 (image=quay.io/ceph/ceph:v19, name=charming_khorana, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 18:43:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-15f10906c79cbe0d9f464125380fc53db93cc1a8f69a224ce0a99a29f95bb9f9-merged.mount: Deactivated successfully.
Jan 20 18:43:18 compute-0 podman[96732]: 2026-01-20 18:43:18.374692529 +0000 UTC m=+0.612802982 container remove 29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512 (image=quay.io/ceph/ceph:v19, name=charming_khorana, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 18:43:18 compute-0 systemd[1]: libpod-conmon-29906f3d6e5ff17988dc4955a2fff1007d46d21ecda816a1627e5a68aa2dc512.scope: Deactivated successfully.
Jan 20 18:43:18 compute-0 sudo[96729]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:18 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:boot
Jan 20 18:43:18 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 1 up:standby
Jan 20 18:43:18 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bekmxe"}]: dispatch
Jan 20 18:43:18 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 1 up:standby
Jan 20 18:43:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1683733983' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Jan 20 18:43:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v23: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:43:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:43:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e8 new map
Jan 20 18:43:19 compute-0 ceph-mon[74381]: pgmap v23: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 20 18:43:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e8 print_map
                                           e8
                                           btime 2026-01-20T18:43:19.580322+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:15.795666+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24196}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24196 members: 24196
                                           [mds.cephfs.compute-2.rrgioo{0:24196} state up:active seq 2 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bekmxe{-1:14580} state up:standby seq 1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.eisxof{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] up:boot
Jan 20 18:43:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 2 up:standby
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.eisxof"} v 0)
Jan 20 18:43:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.eisxof"}]: dispatch
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e8 all = 0
Jan 20 18:43:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:19 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 1ea6261f-0638-4c63-be1b-b435161c148c (Updating mds.cephfs deployment (+3 -> 3))
Jan 20 18:43:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 20 18:43:19 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 1ea6261f-0638-4c63-be1b-b435161c148c (Updating mds.cephfs deployment (+3 -> 3)) in 10 seconds
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev aa82c5c5-5750-4bb3-a15e-118159e55b85 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v24: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.zazymd
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.zazymd
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] up:boot
Jan 20 18:43:20 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 2 up:standby
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.eisxof"}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mon[74381]: pgmap v24: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:20 compute-0 ceph-mon[74381]: Creating key for client.nfs.cephfs.0.0.compute-1.zazymd
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.zazymd-rgw
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.zazymd-rgw
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.zazymd's ganesha conf is defaulting to empty
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.zazymd's ganesha conf is defaulting to empty
Jan 20 18:43:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.zazymd on compute-1
Jan 20 18:43:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.zazymd on compute-1
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e9 new map
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e9 print_map
                                           e9
                                           btime 2026-01-20T18:43:21.131961+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:15.795666+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=24196}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 24196 members: 24196
                                           [mds.cephfs.compute-2.rrgioo{0:24196} state up:active seq 2 addr [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bekmxe{-1:14580} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-1.eisxof{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 9 from mon.0
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:standby
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Dropping low affinity active daemon mds.cephfs.compute-2.rrgioo in favor of higher affinity standby.
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e9  replacing 24196 [v2:192.168.122.102:6804/64351334,v1:192.168.122.102:6805/64351334] mds.0.4 up:active with 14580/cephfs.compute-0.bekmxe [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924]
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Replacing daemon mds.cephfs.compute-2.rrgioo as rank 0 with standby daemon mds.cephfs.compute-0.bekmxe
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e9 fail_mds_gid 24196 mds.cephfs.compute-2.rrgioo role 0
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 2 up:standby
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e10 new map
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e10 print_map
                                           e10
                                           btime 2026-01-20T18:43:21.219517+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        10
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:21.219516+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        56
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14580}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.bekmxe{0:14580} state up:replay seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.eisxof{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 10 from mon.0
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map state change up:standby --> up:replay
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.10 replay_start
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.10  waiting for osdmap 56 (which blocklists prior instance)
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 20 18:43:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.bekmxe=up:replay} 1 up:standby
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.cache creating system inode with ino:0x100
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.cache creating system inode with ino:0x1
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.10 Finished replaying journal
Jan 20 18:43:21 compute-0 ceph-mds[96670]: mds.0.10 making mds journal writeable
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Creating key for client.nfs.cephfs.0.0.compute-1.zazymd-rgw
Jan 20 18:43:21 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:43:21 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.zazymd-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Bind address in nfs.cephfs.0.0.compute-1.zazymd's ganesha conf is defaulting to empty
Jan 20 18:43:21 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Deploying daemon nfs.cephfs.0.0.compute-1.zazymd on compute-1
Jan 20 18:43:21 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:standby
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Dropping low affinity active daemon mds.cephfs.compute-2.rrgioo in favor of higher affinity standby.
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Replacing daemon mds.cephfs.compute-2.rrgioo as rank 0 with standby daemon mds.cephfs.compute-0.bekmxe
Jan 20 18:43:21 compute-0 ceph-mon[74381]: Health check failed: 1 filesystem is degraded (FS_DEGRADED)
Jan 20 18:43:21 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-2.rrgioo=up:active} 2 up:standby
Jan 20 18:43:21 compute-0 ceph-mon[74381]: osdmap e56: 3 total, 3 up, 3 in
Jan 20 18:43:21 compute-0 ceph-mon[74381]: fsmap cephfs:1/1 {0=cephfs.compute-0.bekmxe=up:replay} 1 up:standby
Jan 20 18:43:22 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 13 completed events
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e11 new map
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e11 print_map
                                           e11
                                           btime 2026-01-20T18:43:22.232260+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        11
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:21.250890+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        56
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14580}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.bekmxe{0:14580} state up:reconnect seq 3 join_fscid=1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.eisxof{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-2.rrgioo{-1:24205} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/823413276,v1:192.168.122.102:6805/823413276] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:22 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 11 from mon.0
Jan 20 18:43:22 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 20 18:43:22 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map state change up:replay --> up:reconnect
Jan 20 18:43:22 compute-0 ceph-mds[96670]: mds.0.10 reconnect_start
Jan 20 18:43:22 compute-0 ceph-mds[96670]: mds.0.10 reopen_log
Jan 20 18:43:22 compute-0 ceph-mds[96670]: mds.0.10 reconnect_done
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:reconnect
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/823413276,v1:192.168.122.102:6805/823413276] up:boot
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.bekmxe=up:reconnect} 2 up:standby
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"} v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"}]: dispatch
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e11 all = 0
Jan 20 18:43:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v26: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s wr, 4 op/s
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:22 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.logial
Jan 20 18:43:22 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.logial
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 20 18:43:22 compute-0 ceph-mgr[74676]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 20 18:43:22 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 20 18:43:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:22 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
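`config generate-minimal-conf` is what the orchestrator uses to produce the stripped-down ceph.conf (essentially the fsid and mon_host entries) that it ships alongside each deployed daemon, which is why it recurs before every deploy below. Retrieving it by hand, as a sketch:

    import subprocess

    # Print the minimal client config the mgr distributes to new daemons.
    minimal = subprocess.run(["ceph", "config", "generate-minimal-conf"],
                             capture_output=True, text=True, check=True).stdout
    print(minimal)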
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:23 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:reconnect
Jan 20 18:43:23 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.102:6804/823413276,v1:192.168.122.102:6805/823413276] up:boot
Jan 20 18:43:23 compute-0 ceph-mon[74381]: fsmap cephfs:1/1 {0=cephfs.compute-0.bekmxe=up:reconnect} 2 up:standby
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"}]: dispatch
Jan 20 18:43:23 compute-0 ceph-mon[74381]: pgmap v26: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s wr, 4 op/s
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 20 18:43:23 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e12 new map
Jan 20 18:43:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e12 print_map
                                           e12
                                           btime 2026-01-20T18:43:23.303596+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        12
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:22.310998+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        56
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14580}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.bekmxe{0:14580} state up:rejoin seq 4 join_fscid=1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.eisxof{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-2.rrgioo{-1:24205} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/823413276,v1:192.168.122.102:6805/823413276] compat {c=[1],r=[1],i=[1fff]}]
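The print_map dump above is the authoritative view of fsmap epoch 12: a single rank (max_mds 1) held by mds.cephfs.compute-0.bekmxe in up:rejoin, two standbys, data pool 7 and metadata pool 6. The same structure is available as JSON; a sketch that lists rank states and standbys, assuming the `ceph fs dump -f json` field layout below (filesystems/mdsmap/info plus a top-level standbys list) matches this release:

    import json, subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "fs", "dump", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for fs in dump["filesystems"]:
        m = fs["mdsmap"]
        print(m["fs_name"], "epoch", m["epoch"])
        for info in m["info"].values():
            print("  rank", info["rank"], info["name"], info["state"])
    for sb in dump.get("standbys", []):
        print("standby:", sb["name"])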
Jan 20 18:43:23 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 12 from mon.0
Jan 20 18:43:23 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 20 18:43:23 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map state change up:reconnect --> up:rejoin
Jan 20 18:43:23 compute-0 ceph-mds[96670]: mds.0.10 rejoin_start
Jan 20 18:43:23 compute-0 ceph-mds[96670]: mds.0.10 rejoin_joint_start
Jan 20 18:43:23 compute-0 ceph-mds[96670]: mds.0.10 rejoin_done
Jan 20 18:43:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:rejoin
Jan 20 18:43:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1/1 {0=cephfs.compute-0.bekmxe=up:rejoin} 2 up:standby
Jan 20 18:43:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bekmxe is now active in filesystem cephfs as rank 0
Jan 20 18:43:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v27: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 6 op/s
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Cluster is now healthy
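FS_DEGRADED is raised while a rank is still in recovery states and clears as soon as the rank goes active, which is what happens here. Health can be polled the same way; a short sketch against `ceph health detail -f json` (status/checks are the documented schema, but verify the exact keys on this release):

    import json, subprocess

    health = json.loads(subprocess.run(
        ["ceph", "health", "detail", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    print(health["status"])  # HEALTH_OK once FS_DEGRADED clears
    for name, check in health.get("checks", {}).items():
        print(name, check["severity"], check["summary"]["message"])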
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:25 compute-0 ceph-mon[74381]: Creating key for client.nfs.cephfs.1.0.compute-2.logial
Jan 20 18:43:25 compute-0 ceph-mon[74381]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:rejoin
Jan 20 18:43:25 compute-0 ceph-mon[74381]: fsmap cephfs:1/1 {0=cephfs.compute-0.bekmxe=up:rejoin} 2 up:standby
Jan 20 18:43:25 compute-0 ceph-mon[74381]: daemon mds.cephfs.compute-0.bekmxe is now active in filesystem cephfs as rank 0
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e13 new map
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mds e13 print_map
                                           e13
                                           btime 2026-01-20T18:43:25.166481+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        13
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T18:42:47.693845+0000
                                           modified        2026-01-20T18:43:25.166478+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        56
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14580}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        1
                                           qdb_cluster        leader: 14580 members: 14580
                                           [mds.cephfs.compute-0.bekmxe{0:14580} state up:active seq 5 join_fscid=1 addr [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-1.eisxof{-1:24173} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] compat {c=[1],r=[1],i=[1fff]}]
                                           [mds.cephfs.compute-2.rrgioo{-1:24205} state up:standby seq 1 join_fscid=1 addr [v2:192.168.122.102:6804/823413276,v1:192.168.122.102:6805/823413276] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 18:43:25 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe Updating MDS map to version 13 from mon.0
Jan 20 18:43:25 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map I am now mds.0.10
Jan 20 18:43:25 compute-0 ceph-mds[96670]: mds.0.10 handle_mds_map state change up:rejoin --> up:active
Jan 20 18:43:25 compute-0 ceph-mds[96670]: mds.0.10 recovery_done -- successful recovery!
Jan 20 18:43:25 compute-0 ceph-mds[96670]: mds.0.10 active_start
Jan 20 18:43:25 compute-0 ceph-mds[96670]: mds.0.10 cluster recovered.
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:active
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] up:standby
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bekmxe=up:active} 2 up:standby
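Note the fsmap summary changed from `cephfs:1/1` to `cephfs:1` once rank 0 left recovery and went up:active. A deployment script that must wait for this point can poll `ceph mds stat`, whose one-line output is exactly the fsmap line logged here; a sketch:

    import subprocess, time

    # Block until rank 0 reports up:active, as in the fsmap line above.
    while True:
        stat = subprocess.run(["ceph", "mds", "stat"],
                              capture_output=True, text=True,
                              check=True).stdout
        if "up:active" in stat:
            print(stat.strip())
            break
        time.sleep(2)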
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.logial-rgw
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.logial-rgw
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.logial's ganesha conf is defaulting to empty
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.logial's ganesha conf is defaulting to empty
Jan 20 18:43:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:25 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.logial on compute-2
Jan 20 18:43:25 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.logial on compute-2
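At this point the mgr has finished the per-daemon preparation (client keyring, grace-table entry, minimal conf) and hands off to cephadm on the target host. Placement can be confirmed afterwards through the orchestrator; a sketch, where the --daemon_type filter and the JSON field names are how I would expect `ceph orch ps` to behave and should be treated as assumptions:

    import json, subprocess

    ps = json.loads(subprocess.run(
        ["ceph", "orch", "ps", "--daemon_type", "nfs", "--format", "json"],
        capture_output=True, text=True, check=True).stdout)
    for d in ps:
        # e.g. nfs.cephfs.1.0.compute-2.logial  compute-2  running
        print(d["daemon_type"] + "." + d["daemon_id"],
              d["hostname"], d["status_desc"])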
Jan 20 18:43:26 compute-0 ceph-mon[74381]: pgmap v27: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 6 op/s
Jan 20 18:43:26 compute-0 ceph-mon[74381]: Health check cleared: FS_DEGRADED (was: 1 filesystem is degraded)
Jan 20 18:43:26 compute-0 ceph-mon[74381]: Cluster is now healthy
Jan 20 18:43:26 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.100:6806/3813103924,v1:192.168.122.100:6807/3813103924] up:active
Jan 20 18:43:26 compute-0 ceph-mon[74381]: mds.? [v2:192.168.122.101:6804/2890770273,v1:192.168.122.101:6805/2890770273] up:standby
Jan 20 18:43:26 compute-0 ceph-mon[74381]: fsmap cephfs:1 {0=cephfs.compute-0.bekmxe=up:active} 2 up:standby
Jan 20 18:43:26 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 20 18:43:26 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 20 18:43:26 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:43:26 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.logial-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:43:26 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v28: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 6 op/s
Jan 20 18:43:27 compute-0 ceph-mon[74381]: Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:27 compute-0 ceph-mon[74381]: Creating key for client.nfs.cephfs.1.0.compute-2.logial-rgw
Jan 20 18:43:27 compute-0 ceph-mon[74381]: Bind address in nfs.cephfs.1.0.compute-2.logial's ganesha conf is defaulting to empty
Jan 20 18:43:27 compute-0 ceph-mon[74381]: Deploying daemon nfs.cephfs.1.0.compute-2.logial on compute-2
Jan 20 18:43:27 compute-0 ceph-mon[74381]: pgmap v28: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.6 KiB/s wr, 6 op/s
Jan 20 18:43:27 compute-0 ceph-mds[96670]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 20 18:43:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mds-cephfs-compute-0-bekmxe[96666]: 2026-01-20T18:43:27.571+0000 7f6dfb343640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 20 18:43:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:43:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v29: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 921 B/s wr, 10 op/s
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:43:28 compute-0 ceph-mon[74381]: pgmap v29: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 921 B/s wr, 10 op/s
Jan 20 18:43:28 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:28 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ulclbx
Jan 20 18:43:28 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ulclbx
Jan 20 18:43:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 20 18:43:28 compute-0 ceph-mgr[74676]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 20 18:43:28 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 20 18:43:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 20 18:43:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:28 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:30 compute-0 ceph-mon[74381]: Creating key for client.nfs.cephfs.2.0.compute-0.ulclbx
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Jan 20 18:43:30 compute-0 ceph-mon[74381]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Jan 20 18:43:30 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v30: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 921 B/s wr, 10 op/s
Jan 20 18:43:31 compute-0 ceph-mon[74381]: pgmap v30: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 921 B/s wr, 10 op/s
Jan 20 18:43:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Jan 20 18:43:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 20 18:43:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 20 18:43:32 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Jan 20 18:43:32 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ulclbx-rgw
Jan 20 18:43:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 18:43:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ulclbx-rgw
Jan 20 18:43:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.ulclbx's ganesha conf is defaulting to empty
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.ulclbx's ganesha conf is defaulting to empty
Jan 20 18:43:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 18:43:32 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.ulclbx on compute-0
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.ulclbx on compute-0
Jan 20 18:43:32 compute-0 sudo[96908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:32 compute-0 sudo[96908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:32 compute-0 sudo[96908]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:32 compute-0 sudo[96933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:43:32 compute-0 sudo[96933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
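This is the actual hand-off: the mgr stages a content-addressed cephadm binary under /var/lib/ceph/<fsid>/ and runs it over SSH as the ceph-admin user via sudo, pinning the exact container image digest and an 895 s timeout. Reconstructing that invocation (the argument list is copied from the sudo[96933] line above; the deploy payload itself is supplied by the mgr and never appears in the journal, so it is omitted here):

    import subprocess

    fsid = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    cephadm = (f"/var/lib/ceph/{fsid}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")
    # Same argv the journal records under sudo[96933].
    argv = ["sudo", "/bin/python3", cephadm,
            "--image", image, "--timeout", "895",
            "_orch", "deploy", "--fsid", fsid]
    print(" ".join(argv))  # shown rather than executed in this sketch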
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.611306057 +0000 UTC m=+0.043816626 container create 595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hawking, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:43:32 compute-0 systemd[1]: Started libpod-conmon-595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7.scope.
Jan 20 18:43:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v31: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 806 B/s wr, 9 op/s
Jan 20 18:43:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.591072249 +0000 UTC m=+0.023582858 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.687957885 +0000 UTC m=+0.120468464 container init 595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.69378556 +0000 UTC m=+0.126296129 container start 595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hawking, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:43:32 compute-0 sleepy_hawking[97014]: 167 167
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.698540176 +0000 UTC m=+0.131050775 container attach 595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hawking, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:43:32 compute-0 systemd[1]: libpod-595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7.scope: Deactivated successfully.
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.698928677 +0000 UTC m=+0.131439256 container died 595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:43:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dafae3463fd471a8f49fd827174555853bf96d62d7870bd580dfc13fb2bbb5a-merged.mount: Deactivated successfully.
Jan 20 18:43:32 compute-0 podman[96997]: 2026-01-20 18:43:32.734368969 +0000 UTC m=+0.166879538 container remove 595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:43:32 compute-0 systemd[1]: libpod-conmon-595b96c703ff9c4d902c1432c1e0b625d11abc0b44a726d4b950c9bf8d1b87a7.scope: Deactivated successfully.
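The sleepy_hawking container lives for well under a second: create, init, start, attach, a single "167 167" line, died, remove. That sequence is consistent with cephadm's probe that runs the pinned image with stat as the entrypoint to learn the ceph uid/gid baked into it (167:167 here); the exact probe arguments below are an assumption. A sketch of the equivalent one-shot run:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", image,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True).stdout
    uid, gid = map(int, out.split())  # expected: 167 167
    print(uid, gid)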
Jan 20 18:43:32 compute-0 systemd[1]: Reloading.
Jan 20 18:43:32 compute-0 systemd-rc-local-generator[97056]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:32 compute-0 systemd-sysv-generator[97060]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:33 compute-0 systemd[1]: Reloading.
Jan 20 18:43:33 compute-0 ceph-mon[74381]: Rados config object exists: conf-nfs.cephfs
Jan 20 18:43:33 compute-0 ceph-mon[74381]: Creating key for client.nfs.cephfs.2.0.compute-0.ulclbx-rgw
Jan 20 18:43:33 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Jan 20 18:43:33 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ulclbx-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 18:43:33 compute-0 ceph-mon[74381]: Bind address in nfs.cephfs.2.0.compute-0.ulclbx's ganesha conf is defaulting to empty
Jan 20 18:43:33 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:43:33 compute-0 ceph-mon[74381]: Deploying daemon nfs.cephfs.2.0.compute-0.ulclbx on compute-0
Jan 20 18:43:33 compute-0 ceph-mon[74381]: pgmap v31: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 806 B/s wr, 9 op/s
Jan 20 18:43:33 compute-0 systemd-sysv-generator[97100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:33 compute-0 systemd-rc-local-generator[97097]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:33 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:43:33 compute-0 podman[97156]: 2026-01-20 18:43:33.550374523 +0000 UTC m=+0.046180829 container create 19082f4f11f0ebeee5ead65b4e9412ba6458a380ef5a71ffb1c7790f1f979f3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 18:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95af8f56654db8c05899efc17737e22fbd7c45d82c215669fcf13c08a81786a2/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95af8f56654db8c05899efc17737e22fbd7c45d82c215669fcf13c08a81786a2/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95af8f56654db8c05899efc17737e22fbd7c45d82c215669fcf13c08a81786a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95af8f56654db8c05899efc17737e22fbd7c45d82c215669fcf13c08a81786a2/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:33 compute-0 podman[97156]: 2026-01-20 18:43:33.614336684 +0000 UTC m=+0.110142970 container init 19082f4f11f0ebeee5ead65b4e9412ba6458a380ef5a71ffb1c7790f1f979f3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:33 compute-0 podman[97156]: 2026-01-20 18:43:33.528581103 +0000 UTC m=+0.024387399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:33 compute-0 podman[97156]: 2026-01-20 18:43:33.626887527 +0000 UTC m=+0.122693793 container start 19082f4f11f0ebeee5ead65b4e9412ba6458a380ef5a71ffb1c7790f1f979f3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:43:33 compute-0 bash[97156]: 19082f4f11f0ebeee5ead65b4e9412ba6458a380ef5a71ffb1c7790f1f979f3c
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:43:33 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
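cephadm wraps each daemon in a templated systemd unit named ceph-<fsid>@<daemon>.service, which is what systemd reports starting here; the two Reloading passes above are the daemon-reload issued after the unit files were written. Inspecting the new unit afterwards, as a sketch:

    import subprocess

    fsid = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    unit = f"ceph-{fsid}@nfs.cephfs.2.0.compute-0.ulclbx.service"
    subprocess.run(["systemctl", "status", "--no-pager", unit])
    # To follow its output (the ganesha lines interleaved below):
    #   journalctl -u <unit> -f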
Jan 20 18:43:33 compute-0 sudo[96933]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:43:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:43:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:43:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:33 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:43:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:33 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev aa82c5c5-5750-4bb3-a15e-118159e55b85 (Updating nfs.cephfs deployment (+3 -> 3))
Jan 20 18:43:33 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event aa82c5c5-5750-4bb3-a15e-118159e55b85 (Updating nfs.cephfs deployment (+3 -> 3)) in 13 seconds
Jan 20 18:43:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:43:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev acffdc11-ea51-4301-8e43-aa2dfad7cb0a (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 20 18:43:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:43:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.qylszn on compute-1
Jan 20 18:43:34 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.qylszn on compute-1
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:43:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:34 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:43:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v32: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1.6 KiB/s wr, 11 op/s
Jan 20 18:43:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:34 compute-0 ceph-mon[74381]: Deploying daemon haproxy.nfs.cephfs.compute-1.qylszn on compute-1
Jan 20 18:43:34 compute-0 ceph-mon[74381]: pgmap v32: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 9.3 KiB/s rd, 1.6 KiB/s wr, 11 op/s
Jan 20 18:43:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v33: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Jan 20 18:43:37 compute-0 ceph-mon[74381]: pgmap v33: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.4 KiB/s wr, 9 op/s
Jan 20 18:43:37 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 14 completed events
Jan 20 18:43:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:43:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:38 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.4 KiB/s wr, 14 op/s
Jan 20 18:43:39 compute-0 ceph-mon[74381]: pgmap v34: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.4 KiB/s wr, 14 op/s
Jan 20 18:43:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:43:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:43:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:43:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 20 18:43:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:40 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.ujqhrm on compute-0
Jan 20 18:43:40 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.ujqhrm on compute-0
Jan 20 18:43:40 compute-0 sudo[97227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:40 compute-0 sudo[97227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:40 compute-0 sudo[97227]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:40 compute-0 sudo[97252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:43:40 compute-0 sudo[97252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:41 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:41 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:41 compute-0 ceph-mon[74381]: pgmap v35: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 20 18:43:41 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:41 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6428000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:43:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 20 18:43:42 compute-0 ceph-mon[74381]: Deploying daemon haproxy.nfs.cephfs.compute-0.ujqhrm on compute-0
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.622472404 +0000 UTC m=+2.310191798 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.641397057 +0000 UTC m=+2.329116411 container create 4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d (image=quay.io/ceph/haproxy:2.3, name=amazing_wing)
Jan 20 18:43:43 compute-0 systemd[1]: Started libpod-conmon-4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d.scope.
Jan 20 18:43:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.717941282 +0000 UTC m=+2.405660646 container init 4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d (image=quay.io/ceph/haproxy:2.3, name=amazing_wing)
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.725698678 +0000 UTC m=+2.413418012 container start 4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d (image=quay.io/ceph/haproxy:2.3, name=amazing_wing)
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.729583391 +0000 UTC m=+2.417302725 container attach 4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d (image=quay.io/ceph/haproxy:2.3, name=amazing_wing)
Jan 20 18:43:43 compute-0 systemd[1]: libpod-4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d.scope: Deactivated successfully.
Jan 20 18:43:43 compute-0 amazing_wing[97438]: 0 0
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.73102173 +0000 UTC m=+2.418741074 container died 4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d (image=quay.io/ceph/haproxy:2.3, name=amazing_wing)
Jan 20 18:43:43 compute-0 conmon[97438]: conmon 4e4226928fcf81b6ca75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d.scope/container/memory.events
Jan 20 18:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3123fa9707c7a751f88f015dc87a5003a7e5315fb819640888d2e125e9370390-merged.mount: Deactivated successfully.
Jan 20 18:43:43 compute-0 podman[97317]: 2026-01-20 18:43:43.770119129 +0000 UTC m=+2.457838463 container remove 4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d (image=quay.io/ceph/haproxy:2.3, name=amazing_wing)
Jan 20 18:43:43 compute-0 systemd[1]: libpod-conmon-4e4226928fcf81b6ca7574d7b6864bcdf6779465f831932a1a6100727c94180d.scope: Deactivated successfully.
Jan 20 18:43:43 compute-0 systemd[1]: Reloading.
Jan 20 18:43:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:43 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f64100016e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:43:43 compute-0 ceph-mon[74381]: pgmap v36: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 20 18:43:43 compute-0 systemd-rc-local-generator[97484]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:43 compute-0 systemd-sysv-generator[97488]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:44 compute-0 systemd[1]: Reloading.
Jan 20 18:43:44 compute-0 systemd-rc-local-generator[97522]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:44 compute-0 systemd-sysv-generator[97526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:44 compute-0 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.ujqhrm for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:43:44 compute-0 podman[97582]: 2026-01-20 18:43:44.578411938 +0000 UTC m=+0.040791266 container create 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72d21296265e17c4aaf399b17e695db9b614b7399fab7747faa5128ce502c1f0/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:44 compute-0 podman[97582]: 2026-01-20 18:43:44.642266916 +0000 UTC m=+0.104646254 container init 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:43:44 compute-0 podman[97582]: 2026-01-20 18:43:44.647037672 +0000 UTC m=+0.109417000 container start 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:43:44 compute-0 bash[97582]: 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea
Jan 20 18:43:44 compute-0 podman[97582]: 2026-01-20 18:43:44.561155079 +0000 UTC m=+0.023534427 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 18:43:44 compute-0 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.ujqhrm for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:43:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [NOTICE] 019/184344 (2) : New worker #1 (4) forked
Jan 20 18:43:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 20 18:43:44 compute-0 sudo[97252]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:43:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:43:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:43:45 compute-0 ceph-mon[74381]: pgmap v37: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.8 KiB/s wr, 7 op/s
Jan 20 18:43:45 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:45 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.sxjmbl on compute-2
Jan 20 18:43:45 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.sxjmbl on compute-2
Jan 20 18:43:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:45 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f63fc000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:43:46 compute-0 kernel: ganesha.nfsd[97216]: segfault at 50 ip 00007f64ab7e732e sp 00007f64327fb210 error 4 in libntirpc.so.5.8[7f64ab7cc000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 20 18:43:46 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:43:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[97172]: 20/01/2026 18:43:45 : epoch 696fccd5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f641c001ac0 fd 37 proxy ignored for local
Jan 20 18:43:46 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Jan 20 18:43:46 compute-0 systemd[1]: Started Process Core Dump (PID 97611/UID 0).
Jan 20 18:43:46 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:46 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:46 compute-0 ceph-mon[74381]: Deploying daemon haproxy.nfs.cephfs.compute-2.sxjmbl on compute-2
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:43:46
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['vms', '.nfs', 'default.rgw.log', '.mgr', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'backups']
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Jan 20 18:43:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 20 18:43:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:43:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:43:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 20 18:43:47 compute-0 ceph-mon[74381]: pgmap v38: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:43:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Jan 20 18:43:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 20 18:43:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 20 18:43:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 20 18:43:47 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 61f12202-09fa-40a9-8176-2123ff5b13ab (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 20 18:43:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:43:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:48 compute-0 systemd-coredump[97612]: Process 97176 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 42:
                                                   #0  0x00007f64ab7e732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Jan 20 18:43:48 compute-0 systemd[1]: systemd-coredump@0-97611-0.service: Deactivated successfully.
Jan 20 18:43:48 compute-0 systemd[1]: systemd-coredump@0-97611-0.service: Consumed 2.064s CPU time.
Jan 20 18:43:48 compute-0 podman[97617]: 2026-01-20 18:43:48.214407192 +0000 UTC m=+0.030137961 container died 19082f4f11f0ebeee5ead65b4e9412ba6458a380ef5a71ffb1c7790f1f979f3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 18:43:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-95af8f56654db8c05899efc17737e22fbd7c45d82c215669fcf13c08a81786a2-merged.mount: Deactivated successfully.
Jan 20 18:43:48 compute-0 podman[97617]: 2026-01-20 18:43:48.539795414 +0000 UTC m=+0.355526163 container remove 19082f4f11f0ebeee5ead65b4e9412ba6458a380ef5a71ffb1c7790f1f979f3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:43:48 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:43:48 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:43:48 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 2.285s CPU time.
Jan 20 18:43:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 20 18:43:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 20 18:43:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 20 18:43:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 20 18:43:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 20 18:43:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 20 18:43:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 20 18:43:48 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev a5d87db6-d301-4286-aab1-249bede10285 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 18:43:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:43:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 20 18:43:48 compute-0 ceph-mon[74381]: osdmap e57: 3 total, 3 up, 3 in
Jan 20 18:43:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:48 compute-0 ceph-mon[74381]: pgmap v40: 136 pgs: 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Jan 20 18:43:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Jan 20 18:43:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:43:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:43:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:43:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Jan 20 18:43:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.kuklye on compute-0
Jan 20 18:43:49 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.kuklye on compute-0
Jan 20 18:43:49 compute-0 sudo[97657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:43:49 compute-0 sudo[97657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:49 compute-0 sudo[97657]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:49 compute-0 sudo[97682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:43:49 compute-0 sudo[97682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:43:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 20 18:43:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 20 18:43:50 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 58 pg[6.0( v 56'46 (0'0,56'46] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=58 pruub=10.100094795s) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 56'45 mlcod 56'45 active pruub 207.337554932s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.0( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=58 pruub=10.100094795s) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 56'45 mlcod 0'0 unknown pruub 207.337554932s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.7( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.6( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.a( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.f( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.1( v 56'46 (0'0,56'46] local-lis/les=23/24 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.e( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.3( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.8( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.2( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.5( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 48d8330e-98e4-4d7f-969d-304903e61814 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.4( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.d( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.9( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.c( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 59 pg[6.b( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=23/24 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:43:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 20 18:43:50 compute-0 ceph-mon[74381]: osdmap e58: 3 total, 3 up, 3 in
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:50 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:50 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:43:50 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:43:50 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:43:50 compute-0 ceph-mon[74381]: Deploying daemon keepalived.nfs.cephfs.compute-0.kuklye on compute-0
Jan 20 18:43:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v43: 151 pgs: 15 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:43:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:43:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:43:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 20 18:43:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 20 18:43:51 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 20 18:43:51 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 910b1b9e-f1ca-4511-82d1-2bf15929ff7d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 18:43:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:43:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[8.0( v 40'12 (0'0,40'12] local-lis/les=38/40 n=6 ec=38/38 lis/c=38/38 les/c/f=40/40/0 sis=60 pruub=13.039767265s) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 40'11 mlcod 40'11 active pruub 211.369873047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[8.0( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=38/38 lis/c=38/38 les/c/f=40/40/0 sis=60 pruub=13.039767265s) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 40'11 mlcod 0'0 unknown pruub 211.369873047s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:51 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5649cf5e5b00) operator()   moving buffer(0x5649ce4ab248 space 0x5649ce383390 0x0~1000 clean)
Jan 20 18:43:51 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5649cf5e5b00) operator()   moving buffer(0x5649ce48d7e8 space 0x5649ce3a2760 0x0~1000 clean)
Jan 20 18:43:51 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5649cf5e5b00) operator()   moving buffer(0x5649ce48c488 space 0x5649ce288900 0x0~1000 clean)
Jan 20 18:43:51 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(8.0_head 0x5649cf5e5b00) operator()   moving buffer(0x5649ce4aa988 space 0x5649ce383a10 0x0~1000 clean)
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.c( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.b( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.8( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.a( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.e( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.9( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.5( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.2( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.3( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.4( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.7( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.6( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.1( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.d( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.0( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 56'45 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 60 pg[6.f( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=23/23 les/c/f=24/24/0 sis=58) [0] r=0 lpr=58 pi=[23,58)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:51 compute-0 ceph-mon[74381]: osdmap e59: 3 total, 3 up, 3 in
Jan 20 18:43:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:51 compute-0 ceph-mon[74381]: pgmap v43: 151 pgs: 15 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:43:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:51 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:51 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 20 18:43:51 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 20 18:43:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 20 18:43:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 20 18:43:52 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 20 18:43:52 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 406636bd-f5cf-48d7-8373-23303aa2908e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1f( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.17( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.18( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.16( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.2( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.11( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.5( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.6( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.12( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.13( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1d( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1e( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.19( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1c( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1a( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1b( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.4( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.7( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.b( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.d( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.a( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.9( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.8( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.f( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.e( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.3( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.c( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1( v 40'12 (0'0,40'12] local-lis/les=38/40 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.10( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.15( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.14( v 40'12 lc 0'0 (0'0,40'12] local-lis/les=38/40 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:43:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1f( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.16( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.18( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.2( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.5( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.11( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.12( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.6( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.13( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1e( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1d( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.19( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1c( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1a( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1b( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.4( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.0( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=38/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 40'11 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.7( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.b( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.d( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.a( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.8( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.9( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.e( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.f( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.3( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.c( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.1( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.10( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.15( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.14( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 61 pg[8.17( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=38/38 les/c/f=40/40/0 sis=60) [0] r=0 lpr=60 pi=[38,60)/1 crt=40'12 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:52 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:52 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:52 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:52 compute-0 ceph-mon[74381]: osdmap e60: 3 total, 3 up, 3 in
Jan 20 18:43:52 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:52 compute-0 ceph-mon[74381]: 6.c scrub starts
Jan 20 18:43:52 compute-0 ceph-mon[74381]: 6.c scrub ok
Jan 20 18:43:52 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:52 compute-0 ceph-mon[74381]: osdmap e61: 3 total, 3 up, 3 in
Jan 20 18:43:52 compute-0 ceph-mgr[74676]: [progress WARNING root] Starting Global Recovery Event, 77 pgs not in active + clean state
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.484083284 +0000 UTC m=+2.682646850 container create a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3 (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_yalow, vcs-type=git, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, com.redhat.component=keepalived-container, version=2.2.4)
Jan 20 18:43:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.b deep-scrub starts
Jan 20 18:43:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.b deep-scrub ok
Jan 20 18:43:52 compute-0 systemd[1]: Started libpod-conmon-a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3.scope.
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.468116539 +0000 UTC m=+2.666680125 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 18:43:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.570897092 +0000 UTC m=+2.769460688 container init a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3 (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_yalow, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.openshift.expose-services=, version=2.2.4, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived)
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.578606976 +0000 UTC m=+2.777170573 container start a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3 (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_yalow, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, release=1793, vcs-type=git, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vendor=Red Hat, Inc., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.581694898 +0000 UTC m=+2.780258484 container attach a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3 (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_yalow, description=keepalived for Ceph, name=keepalived, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.component=keepalived-container, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 20 18:43:52 compute-0 nostalgic_yalow[97844]: 0 0
Jan 20 18:43:52 compute-0 systemd[1]: libpod-a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3.scope: Deactivated successfully.
Jan 20 18:43:52 compute-0 conmon[97844]: conmon a0bae9b31d4e16152704 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3.scope/container/memory.events
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.585323635 +0000 UTC m=+2.783887221 container died a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3 (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_yalow, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, release=1793, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, name=keepalived)
Jan 20 18:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-65e603f2192e10525f7742639f7a78e6c3ab5447874ca985f499be8f94fe27af-merged.mount: Deactivated successfully.
Jan 20 18:43:52 compute-0 podman[97747]: 2026-01-20 18:43:52.638269313 +0000 UTC m=+2.836832869 container remove a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3 (image=quay.io/ceph/keepalived:2.2.4, name=nostalgic_yalow, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, vcs-type=git)
Jan 20 18:43:52 compute-0 systemd[1]: libpod-conmon-a0bae9b31d4e161527048dc88b55ca04446e4bd9fb1782316ee102732f4609f3.scope: Deactivated successfully.
Jan 20 18:43:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v46: 213 pgs: 77 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:43:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:43:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:52 compute-0 systemd[1]: Reloading.
Jan 20 18:43:52 compute-0 systemd-rc-local-generator[97891]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:52 compute-0 systemd-sysv-generator[97895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:52 compute-0 systemd[1]: Reloading.
Jan 20 18:43:53 compute-0 systemd-rc-local-generator[97934]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:43:53 compute-0 systemd-sysv-generator[97937]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:43:53 compute-0 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.kuklye for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:43:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 29e128f8-299b-4dde-ac90-6b47b1d482b2 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 18:43:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 62 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=41/42 n=178 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=62 pruub=13.610133171s) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 49'1084 mlcod 49'1084 active pruub 213.967193604s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:53 compute-0 ceph-mon[74381]: 6.b deep-scrub starts
Jan 20 18:43:53 compute-0 ceph-mon[74381]: 6.b deep-scrub ok
Jan 20 18:43:53 compute-0 ceph-mon[74381]: pgmap v46: 213 pgs: 77 unknown, 136 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:53 compute-0 ceph-mon[74381]: 7.16 scrub starts
Jan 20 18:43:53 compute-0 ceph-mon[74381]: 7.16 scrub ok
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:53 compute-0 ceph-mon[74381]: osdmap e62: 3 total, 3 up, 3 in
Jan 20 18:43:53 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Jan 20 18:43:53 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 62 pg[9.0( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=62 pruub=13.610133171s) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 unknown pruub 213.967193604s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4a60c8 space 0x5649ce3b21b0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4d8f28 space 0x5649ce3b2830 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4d8488 space 0x5649ce3b2420 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4d9928 space 0x5649ce307ae0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4c4168 space 0x5649ce3e6420 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce49d4c8 space 0x5649ce2fed10 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4eefc8 space 0x5649ce1271f0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4ee348 space 0x5649ce127050 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649cd091c48 space 0x5649ce3b2760 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4efc48 space 0x5649ce53dd50 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce49dd88 space 0x5649ce265390 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4c5928 space 0x5649ce3b29d0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4ee848 space 0x5649ce127120 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4a7b08 space 0x5649ce3b2010 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4aaca8 space 0x5649ce3e6b70 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4a6528 space 0x5649ce3b2280 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4dfec8 space 0x5649ce3b2b70 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4c52e8 space 0x5649ce3b2900 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4bede8 space 0x5649ce3f5a10 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4a7928 space 0x5649ce3b24f0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4d99c8 space 0x5649ce3b25c0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce49c848 space 0x5649ce491600 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4a7248 space 0x5649ce3b3c80 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce49d7e8 space 0x5649ce383c80 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4c4de8 space 0x5649cde18760 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4ef4c8 space 0x5649ce1276d0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce3311a8 space 0x5649ce3b2350 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4c5e28 space 0x5649ce3b2aa0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce49db08 space 0x5649ce37e690 0x0~1000 clean)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x5649cda35b00) operator()   moving buffer(0x5649ce4c5c48 space 0x5649ce1277a0 0x0~1000 clean)
Jan 20 18:43:53 compute-0 podman[97990]: 2026-01-20 18:43:53.482202169 +0000 UTC m=+0.055430504 container create 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, name=keepalived, vcs-type=git, distribution-scope=public, version=2.2.4)
Jan 20 18:43:53 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 20 18:43:53 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 20 18:43:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85e3a8c65efd40c12ded2c0b3cc47a7ec3cbb443dc5e80c707586e87434d5705/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:53 compute-0 podman[97990]: 2026-01-20 18:43:53.532873527 +0000 UTC m=+0.106101862 container init 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, vcs-type=git, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, io.buildah.version=1.28.2, version=2.2.4, io.openshift.expose-services=, name=keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived)
Jan 20 18:43:53 compute-0 podman[97990]: 2026-01-20 18:43:53.539716079 +0000 UTC m=+0.112944414 container start 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.component=keepalived-container, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 18:43:53 compute-0 bash[97990]: 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03
Jan 20 18:43:53 compute-0 podman[97990]: 2026-01-20 18:43:53.44873948 +0000 UTC m=+0.021967805 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 18:43:53 compute-0 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.kuklye for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: Starting VRRP child process, pid=4
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: Startup complete
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: (VI_0) Entering BACKUP STATE (init)
Jan 20 18:43:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:53 2026: VRRP_Script(check_backend) succeeded
Jan 20 18:43:53 compute-0 sudo[97682]: pam_unix(sudo:session): session closed for user root
Jan 20 18:43:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:43:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.uxdzcq on compute-1
Jan 20 18:43:53 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.uxdzcq on compute-1
Jan 20 18:43:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184354 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:43:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 20 18:43:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 20 18:43:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev b92e8c45-f707-439b-a883-34feca698af8 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 61f12202-09fa-40a9-8176-2123ff5b13ab (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 61f12202-09fa-40a9-8176-2123ff5b13ab (PG autoscaler increasing pool 6 PGs from 1 to 16) in 7 seconds
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev a5d87db6-d301-4286-aab1-249bede10285 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event a5d87db6-d301-4286-aab1-249bede10285 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 5 seconds
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 48d8330e-98e4-4d7f-969d-304903e61814 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 48d8330e-98e4-4d7f-969d-304903e61814 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 4 seconds
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 910b1b9e-f1ca-4511-82d1-2bf15929ff7d (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 910b1b9e-f1ca-4511-82d1-2bf15929ff7d (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 406636bd-f5cf-48d7-8373-23303aa2908e (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 406636bd-f5cf-48d7-8373-23303aa2908e (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 29e128f8-299b-4dde-ac90-6b47b1d482b2 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 29e128f8-299b-4dde-ac90-6b47b1d482b2 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev b92e8c45-f707-439b-a883-34feca698af8 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event b92e8c45-f707-439b-a883-34feca698af8 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1e( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.16( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.19( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.17( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.10( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.3( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.4( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.7( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.13( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.12( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1d( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1c( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.18( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1f( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1b( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1a( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.5( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.6( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.d( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.a( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.b( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.c( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.8( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.e( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.f( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.9( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.2( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.14( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.11( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.15( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=41/42 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.4( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1c( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.5( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.1( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.c( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.2( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=41/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:54 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 63 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=41/41 les/c/f=42/42/0 sis=62) [0] r=0 lpr=62 pi=[41,62)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
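[annotation] The burst above is pool 9 re-peering after a pg_num increase: each placement group opens a new interval at osdmap epoch 62 (sis=62), the single acting OSD ([0] r=0) takes the Primary role, and once every replica activates, the PG logs "react AllReplicasActivated Activating complete". A minimal sketch for pulling the epoch, pg id, and state transition out of these lines; the regex is inferred from the samples above only and may not cover other Ceph releases' formatting:

```python
import re

# Matches the ceph-osd peering lines above, e.g.
#   osd.0 pg_epoch: 63 pg[9.10( ... )] ... state<Start>: transitioning to Primary
PG_LINE = re.compile(
    r"pg_epoch:\s+(?P<epoch>\d+)\s+"
    r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\("
    r".*state<(?P<state>[^>]+)>:\s+(?P<event>.+)$"
)

def parse_pg_event(line: str):
    """Return (epoch, pgid, state, event), or None for non-peering lines."""
    m = PG_LINE.search(line)
    if not m:
        return None
    return (int(m.group("epoch")), m.group("pgid"),
            m.group("state"), m.group("event").strip())

sample = ("osd.0 pg_epoch: 63 pg[9.10( v 49'1085 ... sis=62) [0] r=0 "
          "... mbc={}] state<Start>: transitioning to Primary")
print(parse_pg_event(sample))
# -> (63, '9.10', 'Start', 'transitioning to Primary')
```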
Jan 20 18:43:54 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 20 18:43:54 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 6.8 scrub starts
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 6.8 scrub ok
Jan 20 18:43:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:43:54 compute-0 ceph-mon[74381]: Deploying daemon keepalived.nfs.cephfs.compute-1.uxdzcq on compute-1
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 7.15 scrub starts
Jan 20 18:43:54 compute-0 ceph-mon[74381]: 7.15 scrub ok
Jan 20 18:43:54 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Jan 20 18:43:54 compute-0 ceph-mon[74381]: osdmap e63: 3 total, 3 up, 3 in
Jan 20 18:43:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v49: 275 pgs: 62 unknown, 213 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:43:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 18:43:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:43:55 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Jan 20 18:43:55 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Jan 20 18:43:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 20 18:43:55 compute-0 ceph-mon[74381]: 6.a scrub starts
Jan 20 18:43:55 compute-0 ceph-mon[74381]: 6.a scrub ok
Jan 20 18:43:55 compute-0 ceph-mon[74381]: pgmap v49: 275 pgs: 62 unknown, 213 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:55 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:55 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Jan 20 18:43:55 compute-0 ceph-mon[74381]: 7.c deep-scrub starts
Jan 20 18:43:55 compute-0 ceph-mon[74381]: 7.c deep-scrub ok
Jan 20 18:43:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
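[annotation] The audit trail above shows the mgr raising pg_num on the .nfs and default.rgw.meta pools to 32: the mgr dispatches an "osd pool set" mon command, the leader mon logs the dispatch, and the matching "finished" entry confirms the map change (osdmap e64 just below). A hedged sketch issuing the same command through python-rados mon_command; it assumes a reachable cluster, and the conf/keyring paths are illustrative:

```python
import json
import rados  # python3-rados package

# Illustrative paths; substitute your own conf and keyring locations.
cluster = rados.Rados(
    conffile="/etc/ceph/ceph.conf",
    conf={"keyring": "/etc/ceph/ceph.client.admin.keyring"})
cluster.connect()

# Same JSON command the mgr dispatched in the audit lines above.
cmd = json.dumps({"prefix": "osd pool set",
                  "pool": ".nfs", "var": "pg_num", "val": "32"})
ret, outbuf, outs = cluster.mon_command(cmd, b"")
print(ret, outs)  # ret is 0 on success; outs carries the status text
cluster.shutdown()
```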
Jan 20 18:43:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 20 18:43:55 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 64 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=64 pruub=11.741936684s) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active pruub 214.527496338s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:43:55 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 20 18:43:55 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 64 pg[11.0( empty local-lis/les=46/47 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=64 pruub=11.741936684s) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown pruub 214.527496338s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 20 18:43:56 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 20 18:43:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v51: 337 pgs: 124 unknown, 213 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 20 18:43:56 compute-0 ceph-mon[74381]: 6.9 deep-scrub starts
Jan 20 18:43:56 compute-0 ceph-mon[74381]: 6.9 deep-scrub ok
Jan 20 18:43:56 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:56 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 18:43:56 compute-0 ceph-mon[74381]: osdmap e64: 3 total, 3 up, 3 in
Jan 20 18:43:56 compute-0 ceph-mon[74381]: 7.4 scrub starts
Jan 20 18:43:56 compute-0 ceph-mon[74381]: 7.4 scrub ok
Jan 20 18:43:56 compute-0 ceph-mon[74381]: pgmap v51: 337 pgs: 124 unknown, 213 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
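[annotation] The interleaved pgmap summaries track the split's progress: v49 reports 275 PGs with 62 unknown, v51 reports 337 PGs with 124 unknown as the new split children appear, and by v57 (further below) all 337 are active+clean. A small helper, with the format inferred from these lines only, that reduces a pgmap summary to a per-state histogram:

```python
import re

def pg_states(pgmap_line: str) -> dict:
    """Parse 'pgmap v51: 337 pgs: 124 unknown, 213 active+clean; ...'
    into {'unknown': 124, 'active+clean': 213}."""
    m = re.search(r"pgs:\s*([^;]+);", pgmap_line)
    if not m:
        return {}
    states = {}
    for part in m.group(1).split(","):
        count, state = part.strip().split(" ", 1)
        states[state] = int(count)
    return states

line = ("pgmap v51: 337 pgs: 124 unknown, 213 active+clean; "
        "456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail")
assert pg_states(line) == {"unknown": 124, "active+clean": 213}
# A deployment script might poll this until only 'active+clean' remains.
```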
Jan 20 18:43:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 20 18:43:56 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.17( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.16( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.13( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.2( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.d( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.c( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.b( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.a( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.9( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.e( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.8( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.f( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.3( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.7( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.4( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.19( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.18( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1a( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1d( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1e( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1f( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.10( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.11( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.5( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.6( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.12( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.15( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.14( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1b( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1c( empty local-lis/les=46/47 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.17( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.0( empty local-lis/les=64/65 n=0 ec=46/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.13( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.2( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.c( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.d( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.b( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.9( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.e( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.a( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.f( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.8( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.3( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.16( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.4( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.7( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.18( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1a( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.19( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1e( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1d( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.10( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1f( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.11( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.5( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.6( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.15( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.12( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.14( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1b( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:56 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 65 pg[11.1c( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=46/46 les/c/f=47/47/0 sis=64) [0] r=0 lpr=64 pi=[46,64)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:43:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:43:57 2026: (VI_0) Entering MASTER STATE
Jan 20 18:43:57 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 21 completed events
Jan 20 18:43:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:43:57 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Jan 20 18:43:57 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Jan 20 18:43:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 20 18:43:58 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.2 deep-scrub starts
Jan 20 18:43:58 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.2 deep-scrub ok
Jan 20 18:43:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v53: 337 pgs: 32 peering, 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:43:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 1.
Jan 20 18:43:58 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:43:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 2.285s CPU time.
Jan 20 18:43:58 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:43:58 compute-0 podman[98062]: 2026-01-20 18:43:58.954261467 +0000 UTC m=+0.043340443 container create af40c80f0b5ca4437bf16fa284143bc3d66d618a40bbf26cc0fd453c9079a558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:43:58 compute-0 ceph-mon[74381]: 6.5 scrub starts
Jan 20 18:43:58 compute-0 ceph-mon[74381]: 6.5 scrub ok
Jan 20 18:43:58 compute-0 ceph-mon[74381]: osdmap e65: 3 total, 3 up, 3 in
Jan 20 18:43:58 compute-0 ceph-mon[74381]: 7.1c deep-scrub starts
Jan 20 18:43:58 compute-0 ceph-mon[74381]: 7.1c deep-scrub ok
Jan 20 18:43:58 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc54353cd7985371d48975a97c84c383db39cbb5c9f773671dba86ab45e9ab2/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc54353cd7985371d48975a97c84c383db39cbb5c9f773671dba86ab45e9ab2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc54353cd7985371d48975a97c84c383db39cbb5c9f773671dba86ab45e9ab2/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fc54353cd7985371d48975a97c84c383db39cbb5c9f773671dba86ab45e9ab2/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:43:59 compute-0 podman[98062]: 2026-01-20 18:43:59.00968518 +0000 UTC m=+0.098764196 container init af40c80f0b5ca4437bf16fa284143bc3d66d618a40bbf26cc0fd453c9079a558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:43:59 compute-0 podman[98062]: 2026-01-20 18:43:59.015082694 +0000 UTC m=+0.104161680 container start af40c80f0b5ca4437bf16fa284143bc3d66d618a40bbf26cc0fd453c9079a558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:43:59 compute-0 bash[98062]: af40c80f0b5ca4437bf16fa284143bc3d66d618a40bbf26cc0fd453c9079a558
Jan 20 18:43:59 compute-0 podman[98062]: 2026-01-20 18:43:58.935110128 +0000 UTC m=+0.024189134 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:43:59 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
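[annotation] Above, systemd restarted the nfs.cephfs.2.0.compute-0.ulclbx unit (restart counter at 1), cephadm's podman wrapper pulled the Ceph image and started the ganesha container, and the unit came back up. A hedged sketch for reading a unit's restart counter via systemctl's NRestarts property; the unit name is taken from the log above, so adjust it for your fsid and daemon:

```python
import subprocess

# Unit name copied from the systemd lines above.
UNIT = ("ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1"
        "@nfs.cephfs.2.0.compute-0.ulclbx.service")

def restart_count(unit: str) -> int:
    """Read systemd's restart counter, the same value logged as
    'Scheduled restart job, restart counter is at 1.'"""
    out = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts", "--value", unit],
        capture_output=True, text=True, check=True).stdout.strip()
    return int(out)

print(restart_count(UNIT))
```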
Jan 20 18:43:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 20 18:43:59 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:43:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:43:59 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:43:59 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 20 18:43:59 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 6.4 deep-scrub starts
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 6.4 deep-scrub ok
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 7.13 scrub starts
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 7.13 scrub ok
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 6.2 deep-scrub starts
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 6.2 deep-scrub ok
Jan 20 18:44:00 compute-0 ceph-mon[74381]: pgmap v53: 337 pgs: 32 peering, 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 7.a deep-scrub starts
Jan 20 18:44:00 compute-0 ceph-mon[74381]: 7.a deep-scrub ok
Jan 20 18:44:00 compute-0 ceph-mon[74381]: osdmap e66: 3 total, 3 up, 3 in
Jan 20 18:44:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:44:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:44:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:44:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 20 18:44:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 20 18:44:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:44:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.rmanvu on compute-2
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.rmanvu on compute-2
Jan 20 18:44:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 32 peering, 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 6.3 scrub starts
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 6.3 scrub ok
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 7.1d scrub starts
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 7.1d scrub ok
Jan 20 18:44:01 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 6.6 scrub starts
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 6.6 scrub ok
Jan 20 18:44:01 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:01 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:01 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Jan 20 18:44:01 compute-0 ceph-mon[74381]: Deploying daemon keepalived.nfs.cephfs.compute-2.rmanvu on compute-2
Jan 20 18:44:01 compute-0 ceph-mon[74381]: pgmap v55: 337 pgs: 32 peering, 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:01 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 20 18:44:01 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 20 18:44:01 compute-0 anacron[5103]: Job `cron.daily' started
Jan 20 18:44:01 compute-0 anacron[5103]: Job `cron.daily' terminated
Jan 20 18:44:01 compute-0 sudo[98146]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kokrbayflbevtugpojpmfcsvvcmevlle ; /usr/bin/python3'
Jan 20 18:44:01 compute-0 sudo[98146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:44:02 compute-0 python3[98148]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:44:02 compute-0 ceph-mon[74381]: 7.1e scrub starts
Jan 20 18:44:02 compute-0 ceph-mon[74381]: 7.1e scrub ok
Jan 20 18:44:02 compute-0 ceph-mon[74381]: 6.7 scrub starts
Jan 20 18:44:02 compute-0 ceph-mon[74381]: 6.7 scrub ok
Jan 20 18:44:02 compute-0 podman[98149]: 2026-01-20 18:44:02.195307342 +0000 UTC m=+0.046120446 container create c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393 (image=quay.io/ceph/ceph:v19, name=sharp_mcclintock, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:44:02 compute-0 systemd[1]: Started libpod-conmon-c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393.scope.
Jan 20 18:44:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3670e57fdeaaf6df7a194687f2913559eebc9ad6904e677836951c2bee79be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f3670e57fdeaaf6df7a194687f2913559eebc9ad6904e677836951c2bee79be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:02 compute-0 podman[98149]: 2026-01-20 18:44:02.176649636 +0000 UTC m=+0.027462740 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:44:02 compute-0 podman[98149]: 2026-01-20 18:44:02.272738481 +0000 UTC m=+0.123551595 container init c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393 (image=quay.io/ceph/ceph:v19, name=sharp_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:44:02 compute-0 podman[98149]: 2026-01-20 18:44:02.283921038 +0000 UTC m=+0.134734142 container start c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393 (image=quay.io/ceph/ceph:v19, name=sharp_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:44:02 compute-0 podman[98149]: 2026-01-20 18:44:02.287739599 +0000 UTC m=+0.138552753 container attach c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393 (image=quay.io/ceph/ceph:v19, name=sharp_mcclintock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 18:44:02 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 20 18:44:02 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 20 18:44:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v56: 337 pgs: 32 peering, 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:02 compute-0 sharp_mcclintock[98164]: could not fetch user info: no user info saved
Jan 20 18:44:02 compute-0 systemd[1]: libpod-c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393.scope: Deactivated successfully.
Jan 20 18:44:02 compute-0 podman[98149]: 2026-01-20 18:44:02.744173604 +0000 UTC m=+0.594986708 container died c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393 (image=quay.io/ceph/ceph:v19, name=sharp_mcclintock, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 18:44:03 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 20 18:44:03 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 20 18:44:04 compute-0 ceph-mon[74381]: 7.10 scrub starts
Jan 20 18:44:04 compute-0 ceph-mon[74381]: 7.10 scrub ok
Jan 20 18:44:04 compute-0 ceph-mon[74381]: 6.e scrub starts
Jan 20 18:44:04 compute-0 ceph-mon[74381]: 6.e scrub ok
Jan 20 18:44:04 compute-0 ceph-mon[74381]: pgmap v56: 337 pgs: 32 peering, 305 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:44:04 2026: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 100, forcing new election
Jan 20 18:44:04 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 20 18:44:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v57: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:05 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:44:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:05 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:44:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 20 18:44:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.0 deep-scrub starts
Jan 20 18:44:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v58: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:44:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.0 deep-scrub ok
Jan 20 18:44:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 20 18:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f3670e57fdeaaf6df7a194687f2913559eebc9ad6904e677836951c2bee79be-merged.mount: Deactivated successfully.
Jan 20 18:44:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 20 18:44:07 compute-0 podman[98149]: 2026-01-20 18:44:07.024720384 +0000 UTC m=+4.875533488 container remove c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393 (image=quay.io/ceph/ceph:v19, name=sharp_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 20 18:44:07 compute-0 ceph-mon[74381]: 7.17 scrub starts
Jan 20 18:44:07 compute-0 ceph-mon[74381]: 7.17 scrub ok
Jan 20 18:44:07 compute-0 ceph-mon[74381]: 6.1 scrub starts
Jan 20 18:44:07 compute-0 ceph-mon[74381]: 6.1 scrub ok
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 20 18:44:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:44:07 compute-0 sudo[98146]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.1c( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.15( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.18( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.14( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.12( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.6( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.b( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.5( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.8( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.f( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.2( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.3( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.8( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.2( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.8( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.e( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.6( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.c( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.b( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.a( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.e( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.10( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.1b( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.17( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722425461s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.846771240s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.17( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722401619s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.846771240s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.19( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.10( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.16( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.724952698s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.849929810s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.16( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.724940300s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.849929810s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.13( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.724841118s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.849929810s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.10( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228632927s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353729248s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.15( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228602409s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353713989s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.13( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.724817276s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.849929810s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.10( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228616714s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353729248s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.15( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228583336s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353713989s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.14( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228589058s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353759766s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.14( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228566170s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353759766s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.f( v 56'46 (0'0,56'46] local-lis/les=58/60 n=3 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.210630417s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335952759s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.f( v 56'46 (0'0,56'46] local-lis/les=58/60 n=3 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.210550308s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335952759s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.3( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228084564s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353530884s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.d( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.210454941s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335922241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.d( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.210433960s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335922241s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.3( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.228063583s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353530884s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[12.19( empty local-lis/les=0/0 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.13( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.9( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.1( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.209563255s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335922241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.1( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.209547997s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335922241s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.f( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.227084160s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353485107s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.f( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.227046013s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353485107s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.8( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.227141380s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353607178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.8( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.227121353s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353607178s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.4( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.a( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722996712s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850082397s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.7( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.208786011s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335906982s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.a( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722942352s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850082397s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.9( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.226254463s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353439331s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.9( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.226236343s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353439331s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.7( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.208765030s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335906982s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.a( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.225984573s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353408813s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.a( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.225967407s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353408813s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[7.1e( empty local-lis/les=0/0 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.d( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.225464821s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353393555s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.d( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.225445747s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353393555s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.3( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.207900047s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335861206s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.f( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722096443s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850082397s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.e( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722084999s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850067139s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.3( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.207877159s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335861206s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.f( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722079277s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850082397s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.e( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.722051620s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850067139s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.c( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.225527763s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353637695s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.8( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721952438s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850097656s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.c( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.225510597s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353637695s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.8( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721938133s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850097656s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.13( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.b( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.224959373s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353332520s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.5( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.206751823s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335159302s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.b( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.224924088s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353332520s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.5( v 56'46 (0'0,56'46] local-lis/les=58/60 n=2 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.206730843s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335159302s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.3( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721531868s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850097656s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.3( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721509933s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850097656s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.18( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[10.1b( empty local-lis/les=0/0 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.9( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.206306458s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.335159302s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.9( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.206284523s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.335159302s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.4( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721067429s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850112915s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.7( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721053123s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850112915s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.4( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721048355s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850112915s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.7( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.721035004s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850112915s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.4( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.224206924s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353317261s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.4( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.224186897s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353317261s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.1b( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.224051476s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353225708s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.19( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720945358s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850143433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.1b( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.224036217s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353225708s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.19( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720911980s) [2] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850143433s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.19( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223871231s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353164673s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1a( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720842361s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850143433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1d( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720829010s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850158691s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1a( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720829010s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850143433s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1d( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720815659s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850158691s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.19( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223800659s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353164673s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1e( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720712662s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850143433s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1e( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720694542s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850143433s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.1c( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223712921s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353240967s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.1c( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223699570s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353240967s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.6( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223310471s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353042603s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.5( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720475197s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850219727s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.12( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223214149s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.352966309s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.5( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720460892s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850219727s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.6( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223290443s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353042603s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.12( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223195076s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.352966309s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.5( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223131180s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.352951050s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.b( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.203763962s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 222.333602905s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.5( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223113060s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.352951050s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[6.b( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=67 pruub=8.203742027s) [2] r=-1 lpr=67 pi=[58,67)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 222.333602905s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.2( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223033905s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.352951050s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720293999s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850234985s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.2( v 40'12 (0'0,40'12] local-lis/les=60/61 n=1 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223016739s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.352951050s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720280647s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850234985s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.12( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720310211s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850311279s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.12( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720289230s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850311279s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.11( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.222924232s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.352981567s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.11( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.222906113s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.352981567s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.16( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.222832680s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.352935791s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.17( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223675728s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.353805542s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.16( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.222815514s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.352935791s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.14( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720170021s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850311279s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.14( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720155716s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850311279s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.17( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.223662376s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.353805542s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1b( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720095634s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850326538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1c( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720088005s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 active pruub 227.850326538s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1b( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720080376s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850326538s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[11.1c( empty local-lis/les=64/65 n=0 ec=64/46 lis/c=64/64 les/c/f=65/65/0 sis=67 pruub=13.720075607s) [1] r=-1 lpr=67 pi=[64,67)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 227.850326538s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.18( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.222670555s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.352951050s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.18( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.222643852s) [1] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.352951050s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.1f( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.219504356s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 active pruub 223.349868774s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 67 pg[8.1f( v 40'12 (0'0,40'12] local-lis/les=60/61 n=0 ec=60/38 lis/c=60/60 les/c/f=61/61/0 sis=67 pruub=9.219487190s) [2] r=-1 lpr=67 pi=[60,67)/1 crt=40'12 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 223.349868774s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:07 compute-0 systemd[1]: libpod-conmon-c06a29a50b3fbe6c9170eccaef07587be5a7270c9753041d2da556db8cd68393.scope: Deactivated successfully.
Jan 20 18:44:07 compute-0 sudo[98291]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygieschmglytlvyxnofirqhxkiqvavky ; /usr/bin/python3'
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:44:07 compute-0 sudo[98291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev acffdc11-ea51-4301-8e43-aa2dfad7cb0a (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event acffdc11-ea51-4301-8e43-aa2dfad7cb0a (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 33 seconds
Jan 20 18:44:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 81431ac0-327c-48b6-ae64-1a747013adb7 (Updating alertmanager deployment (+1 -> 1))
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Jan 20 18:44:07 compute-0 sudo[98294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:44:07 compute-0 sudo[98294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:07 compute-0 sudo[98294]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:07 compute-0 python3[98293]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:44:07 compute-0 sudo[98319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:44:07 compute-0 sudo[98319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:07 compute-0 podman[98331]: 2026-01-20 18:44:07.399527567 +0000 UTC m=+0.046264190 container create 0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49 (image=quay.io/ceph/ceph:v19, name=nice_volhard, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 18:44:07 compute-0 systemd[1]: Started libpod-conmon-0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49.scope.
Jan 20 18:44:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd2c3fe3c30ebfe4120627a31a3ae7b6a809227a4a6a42236a1365330792157/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd2c3fe3c30ebfe4120627a31a3ae7b6a809227a4a6a42236a1365330792157/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:07 compute-0 podman[98331]: 2026-01-20 18:44:07.46993388 +0000 UTC m=+0.116670523 container init 0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49 (image=quay.io/ceph/ceph:v19, name=nice_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:44:07 compute-0 podman[98331]: 2026-01-20 18:44:07.38006231 +0000 UTC m=+0.026798963 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:44:07 compute-0 podman[98331]: 2026-01-20 18:44:07.476206276 +0000 UTC m=+0.122942899 container start 0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49 (image=quay.io/ceph/ceph:v19, name=nice_volhard, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:44:07 compute-0 podman[98331]: 2026-01-20 18:44:07.479920735 +0000 UTC m=+0.126657388 container attach 0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49 (image=quay.io/ceph/ceph:v19, name=nice_volhard, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 22 completed events
Jan 20 18:44:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:44:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:07 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 6bc8ab9b-9093-4b67-a6e4-b8c5e39c1123 (Global Recovery Event) in 15 seconds
Jan 20 18:44:07 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 20 18:44:07 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 20 18:44:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.b scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.b scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 6.d scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: pgmap v57: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.8 scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.8 scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 6.d scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 6.0 deep-scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.9 scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.9 scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: pgmap v58: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.1f scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 7.1f scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 6.f scrub starts
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 6.0 deep-scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: 6.f scrub ok
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:08 compute-0 ceph-mon[74381]: osdmap e67: 3 total, 3 up, 3 in
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:08 compute-0 ceph-mon[74381]: Deploying daemon alertmanager.compute-0 on compute-0
Jan 20 18:44:08 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:08 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.10( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.15( v 66'54 lc 66'53 (0'0,66'54] local-lis/les=67/68 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=66'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.18( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.1e( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.14( v 66'54 lc 66'53 (0'0,66'54] local-lis/les=67/68 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=66'54 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.12( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.9( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.6( v 66'45 lc 56'43 (0'0,66'45] local-lis/les=67/68 n=1 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=66'45 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.13( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.13( v 44'48 (0'0,44'48] local-lis/les=67/68 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.19( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.18( v 44'48 (0'0,44'48] local-lis/les=67/68 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.1c( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.1b( v 44'48 (0'0,44'48] local-lis/les=67/68 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.8( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.19( v 44'48 (0'0,44'48] local-lis/les=67/68 n=0 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.b( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.2( v 44'48 (0'0,44'48] local-lis/les=67/68 n=1 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.4( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.5( v 44'48 (0'0,44'48] local-lis/les=67/68 n=1 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.f( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.2( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.3( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.6( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.c( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.e( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[10.8( v 44'48 (0'0,44'48] local-lis/les=67/68 n=1 ec=62/43 lis/c=62/62 les/c/f=63/63/0 sis=67) [0] r=0 lpr=67 pi=[62,67)/1 crt=44'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.b( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.10( v 66'47 lc 66'46 (0'0,66'47] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=66'47 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.e( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.a( v 66'45 lc 0'0 (0'0,66'45] local-lis/les=67/68 n=1 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=66'45 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[7.1b( empty local-lis/les=67/68 n=0 ec=60/25 lis/c=60/60 les/c/f=61/61/0 sis=67) [0] r=0 lpr=67 pi=[60,67)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 68 pg[12.8( v 56'44 (0'0,56'44] local-lis/les=67/68 n=0 ec=64/53 lis/c=64/64 les/c/f=66/66/0 sis=67) [0] r=0 lpr=67 pi=[64,67)/1 crt=56'44 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 112 peering, 225 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:08 compute-0 nice_volhard[98359]: {
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "user_id": "openstack",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "display_name": "openstack",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "email": "",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "suspended": 0,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "max_buckets": 1000,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "subusers": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "keys": [
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         {
Jan 20 18:44:08 compute-0 nice_volhard[98359]:             "user": "openstack",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:             "access_key": "FGKCDM19EV4IB0T1OMAP",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:             "secret_key": "hemdo3JYrquqnQLCIVC8N9CL7CqHZDVCoHdS5FGe",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:             "active": true,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:             "create_date": "2026-01-20T18:44:08.825420Z"
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         }
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     ],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "swift_keys": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "caps": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "op_mask": "read, write, delete",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "default_placement": "",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "default_storage_class": "",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "placement_tags": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "bucket_quota": {
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "enabled": false,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "check_on_raw": false,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "max_size": -1,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "max_size_kb": 0,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "max_objects": -1
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     },
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "user_quota": {
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "enabled": false,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "check_on_raw": false,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "max_size": -1,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "max_size_kb": 0,
Jan 20 18:44:08 compute-0 nice_volhard[98359]:         "max_objects": -1
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     },
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "temp_url_keys": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "type": "rgw",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "mfa_ids": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "account_id": "",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "path": "/",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "create_date": "2026-01-20T18:44:08.824579Z",
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "tags": [],
Jan 20 18:44:08 compute-0 nice_volhard[98359]:     "group_ids": []
Jan 20 18:44:08 compute-0 nice_volhard[98359]: }
Jan 20 18:44:08 compute-0 nice_volhard[98359]: 
Jan 20 18:44:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:44:08 2026: (VI_0) Received advert from 192.168.122.102 with lower priority 90, ours 100, forcing new election
Jan 20 18:44:08 compute-0 systemd[1]: libpod-0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49.scope: Deactivated successfully.
Jan 20 18:44:08 compute-0 conmon[98359]: conmon 0977a42cd87565efcc61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49.scope/container/memory.events
Jan 20 18:44:08 compute-0 podman[98331]: 2026-01-20 18:44:08.969024134 +0000 UTC m=+1.615760757 container died 0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49 (image=quay.io/ceph/ceph:v19, name=nice_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cd2c3fe3c30ebfe4120627a31a3ae7b6a809227a4a6a42236a1365330792157-merged.mount: Deactivated successfully.
Jan 20 18:44:09 compute-0 podman[98331]: 2026-01-20 18:44:09.375998143 +0000 UTC m=+2.022734766 container remove 0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49 (image=quay.io/ceph/ceph:v19, name=nice_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:44:09 compute-0 systemd[1]: libpod-conmon-0977a42cd87565efcc618ab707c79146819b49b8a8efb00997900758abf29c49.scope: Deactivated successfully.
Jan 20 18:44:09 compute-0 sudo[98291]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.420331302 +0000 UTC m=+1.711484802 volume create 4ddabca55f59b094211dc2485c0add549ee53d7f050320630b188799116dba21
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.426529657 +0000 UTC m=+1.717683157 container create dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200 (image=quay.io/prometheus/alertmanager:v0.25.0, name=thirsty_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.408770544 +0000 UTC m=+1.699924064 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 20 18:44:09 compute-0 systemd[1]: Started libpod-conmon-dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200.scope.
Jan 20 18:44:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ef11d5b8f65556d8251f983a05a418c9313f1009ea74f39c30583d7cde3473b/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.486042909 +0000 UTC m=+1.777196419 container init dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200 (image=quay.io/prometheus/alertmanager:v0.25.0, name=thirsty_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.49136815 +0000 UTC m=+1.782521650 container start dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200 (image=quay.io/prometheus/alertmanager:v0.25.0, name=thirsty_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 thirsty_herschel[98640]: 65534 65534
Jan 20 18:44:09 compute-0 systemd[1]: libpod-dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200.scope: Deactivated successfully.
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.494395571 +0000 UTC m=+1.785549071 container attach dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200 (image=quay.io/prometheus/alertmanager:v0.25.0, name=thirsty_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.494607667 +0000 UTC m=+1.785761187 container died dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200 (image=quay.io/prometheus/alertmanager:v0.25.0, name=thirsty_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ef11d5b8f65556d8251f983a05a418c9313f1009ea74f39c30583d7cde3473b-merged.mount: Deactivated successfully.
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.52896775 +0000 UTC m=+1.820121250 container remove dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200 (image=quay.io/prometheus/alertmanager:v0.25.0, name=thirsty_herschel, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98432]: 2026-01-20 18:44:09.533555083 +0000 UTC m=+1.824708603 volume remove 4ddabca55f59b094211dc2485c0add549ee53d7f050320630b188799116dba21
Jan 20 18:44:09 compute-0 systemd[1]: libpod-conmon-dd2a226480c05617abc77194e15aadbb6b19627544b93a63980fcbfc0d467200.scope: Deactivated successfully.
Jan 20 18:44:09 compute-0 ceph-mon[74381]: 9.15 scrub starts
Jan 20 18:44:09 compute-0 ceph-mon[74381]: 9.15 scrub ok
Jan 20 18:44:09 compute-0 ceph-mon[74381]: 10.17 scrub starts
Jan 20 18:44:09 compute-0 ceph-mon[74381]: 10.17 scrub ok
Jan 20 18:44:09 compute-0 ceph-mon[74381]: osdmap e68: 3 total, 3 up, 3 in
Jan 20 18:44:09 compute-0 ceph-mon[74381]: pgmap v61: 337 pgs: 112 peering, 225 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:09 compute-0 ceph-mon[74381]: 10.16 scrub starts
Jan 20 18:44:09 compute-0 ceph-mon[74381]: 10.16 scrub ok
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.586459558 +0000 UTC m=+0.034930989 volume create 7dea72ccbdfc5389189c6a80d2e2d77f674d613b9b14d7eeef7aa1242d4bae5c
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.595177331 +0000 UTC m=+0.043648762 container create 9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_hamilton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 systemd[1]: Started libpod-conmon-9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1.scope.
Jan 20 18:44:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ebefc0ce7894397e04874265d2f435bd6b372a650a3641ec192844833ad7fdd/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.573474353 +0000 UTC m=+0.021945804 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.671944161 +0000 UTC m=+0.120415622 container init 9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_hamilton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.676756719 +0000 UTC m=+0.125228150 container start 9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_hamilton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 focused_hamilton[98674]: 65534 65534
Jan 20 18:44:09 compute-0 systemd[1]: libpod-9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1.scope: Deactivated successfully.
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.67941467 +0000 UTC m=+0.127886131 container attach 9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_hamilton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.679649466 +0000 UTC m=+0.128120917 container died 9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_hamilton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ebefc0ce7894397e04874265d2f435bd6b372a650a3641ec192844833ad7fdd-merged.mount: Deactivated successfully.
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.715079468 +0000 UTC m=+0.163550909 container remove 9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=focused_hamilton, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:09 compute-0 podman[98657]: 2026-01-20 18:44:09.718151839 +0000 UTC m=+0.166623270 volume remove 7dea72ccbdfc5389189c6a80d2e2d77f674d613b9b14d7eeef7aa1242d4bae5c
Jan 20 18:44:09 compute-0 systemd[1]: libpod-conmon-9cbdf69ca9a8541f11ef02f271f5ddf67f41c1378b48a197a06c67eea5c9f0a1.scope: Deactivated successfully.
Jan 20 18:44:09 compute-0 systemd[1]: Reloading.
Jan 20 18:44:09 compute-0 systemd-rc-local-generator[98741]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:09 compute-0 systemd-sysv-generator[98744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:09 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Jan 20 18:44:09 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Jan 20 18:44:09 compute-0 python3[98715]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:44:10 compute-0 systemd[1]: Reloading.
Jan 20 18:44:10 compute-0 systemd-rc-local-generator[98777]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [dashboard INFO request] [192.168.122.100:49276] [GET] [200] [0.132s] [6.3K] [1022e126-a3d1-4bb0-9da4-baa252d67ac2] /
Jan 20 18:44:10 compute-0 systemd-sysv-generator[98782]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:10 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:44:10 compute-0 podman[98861]: 2026-01-20 18:44:10.48964572 +0000 UTC m=+0.041771551 volume create 9f2672371bbd283d13046fbb26b794c830d0ef5aa64cb2c388880da63ca12449
Jan 20 18:44:10 compute-0 podman[98861]: 2026-01-20 18:44:10.501071475 +0000 UTC m=+0.053197316 container create 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e24557620e2c2001eb42373eb94cf38a998383750a0b4c35120e7eb4bee4120/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e24557620e2c2001eb42373eb94cf38a998383750a0b4c35120e7eb4bee4120/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:10 compute-0 podman[98861]: 2026-01-20 18:44:10.556012515 +0000 UTC m=+0.108138376 container init 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:10 compute-0 podman[98861]: 2026-01-20 18:44:10.560309209 +0000 UTC m=+0.112435050 container start 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:10 compute-0 bash[98861]: 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1
Jan 20 18:44:10 compute-0 podman[98861]: 2026-01-20 18:44:10.474440417 +0000 UTC m=+0.026566268 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 20 18:44:10 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:44:10 compute-0 ceph-mon[74381]: 7.1a scrub starts
Jan 20 18:44:10 compute-0 ceph-mon[74381]: 7.1a scrub ok
Jan 20 18:44:10 compute-0 ceph-mon[74381]: 12.6 scrub starts
Jan 20 18:44:10 compute-0 ceph-mon[74381]: 12.6 scrub ok
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.598Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.598Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 20 18:44:10 compute-0 python3[98850]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.609Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.611Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 20 18:44:10 compute-0 sudo[98319]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.646Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.646Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.650Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 20 18:44:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:10.650Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [dashboard INFO request] [192.168.122.100:49292] [GET] [200] [0.001s] [6.3K] [cfbd2e0c-3603-4ed6-b499-56ec1745f6ed] /
Jan 20 18:44:10 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.15 deep-scrub starts
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 81431ac0-327c-48b6-ae64-1a747013adb7 (Updating alertmanager deployment (+1 -> 1))
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 81431ac0-327c-48b6-ae64-1a747013adb7 (Updating alertmanager deployment (+1 -> 1)) in 3 seconds
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 5a10f15c-962b-41ba-823a-8b1f4795524e (Updating grafana deployment (+1 -> 1))
Jan 20 18:44:10 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.15 deep-scrub ok
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 112 peering, 225 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 20 18:44:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Jan 20 18:44:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Jan 20 18:44:10 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Jan 20 18:44:10 compute-0 sudo[98897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:44:10 compute-0 sudo[98897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:10 compute-0 sudo[98897]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:10 compute-0 sudo[98922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:44:10 compute-0 sudo[98922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:11 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.a scrub starts
Jan 20 18:44:11 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.a scrub ok
Jan 20 18:44:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e68 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: 10.15 deep-scrub starts
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: 10.15 deep-scrub ok
Jan 20 18:44:12 compute-0 ceph-mon[74381]: Regenerating cephadm self-signed grafana TLS certificates
Jan 20 18:44:12 compute-0 ceph-mon[74381]: pgmap v62: 337 pgs: 112 peering, 225 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 20 18:44:12 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mon[74381]: 7.19 scrub starts
Jan 20 18:44:12 compute-0 ceph-mon[74381]: 7.19 scrub ok
Jan 20 18:44:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:12.611Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000104855s
Jan 20 18:44:12 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 24 completed events
Jan 20 18:44:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:44:12 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Jan 20 18:44:12 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Jan 20 18:44:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v63: 337 pgs: 112 peering, 225 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:12 compute-0 ceph-mgr[74676]: [progress WARNING root] Starting Global Recovery Event,112 pgs not in active + clean state
Jan 20 18:44:13 compute-0 ceph-mon[74381]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Jan 20 18:44:13 compute-0 ceph-mon[74381]: Deploying daemon grafana.compute-0 on compute-0
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 8.6 deep-scrub starts
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 8.6 deep-scrub ok
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 12.a scrub starts
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 12.a scrub ok
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 12.5 scrub starts
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 12.5 scrub ok
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 12.10 scrub starts
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 12.10 scrub ok
Jan 20 18:44:13 compute-0 ceph-mon[74381]: pgmap v63: 337 pgs: 112 peering, 225 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 10.0 scrub starts
Jan 20 18:44:13 compute-0 ceph-mon[74381]: 10.0 scrub ok
Jan 20 18:44:13 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:14 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:44:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 298 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:14 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 20 18:44:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 18:44:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 18:44:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:44:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:14 : epoch 696fccef : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:44:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 20 18:44:15 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 20 18:44:15 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 20 18:44:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:15 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1bc000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:16 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:16 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v65: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 18:44:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 18:44:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:16 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:44:16 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:44:16 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.688839855 +0000 UTC m=+6.319944001 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.707325296 +0000 UTC m=+6.338429412 container create 4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8 (image=quay.io/ceph/grafana:10.4.0, name=tender_payne, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:17 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Jan 20 18:44:17 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Jan 20 18:44:17 compute-0 systemd[1]: Started libpod-conmon-4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8.scope.
Jan 20 18:44:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:17 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:17 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 61668e3f-b078-4ccf-a561-1ee716946411 (Global Recovery Event) in 5 seconds
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.919965369 +0000 UTC m=+6.551069495 container init 4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8 (image=quay.io/ceph/grafana:10.4.0, name=tender_payne, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.928837396 +0000 UTC m=+6.559941512 container start 4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8 (image=quay.io/ceph/grafana:10.4.0, name=tender_payne, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.931650921 +0000 UTC m=+6.562755187 container attach 4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8 (image=quay.io/ceph/grafana:10.4.0, name=tender_payne, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:17 compute-0 systemd[1]: libpod-4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8.scope: Deactivated successfully.
Jan 20 18:44:17 compute-0 tender_payne[99220]: 472 0
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.93429579 +0000 UTC m=+6.565399906 container died 4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8 (image=quay.io/ceph/grafana:10.4.0, name=tender_payne, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:17 compute-0 conmon[99220]: conmon 4bdde6838c1905a96a40 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8.scope/container/memory.events
Jan 20 18:44:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae097c90facf5766c527c670e4f8b8edd59600e5550fba87bd5b3b0e87347d72-merged.mount: Deactivated successfully.
Jan 20 18:44:17 compute-0 podman[98986]: 2026-01-20 18:44:17.975452185 +0000 UTC m=+6.606556301 container remove 4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8 (image=quay.io/ceph/grafana:10.4.0, name=tender_payne, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:17 compute-0 systemd[1]: libpod-conmon-4bdde6838c1905a96a407ae4a070d333decdfa22ed7ea6049bbdf7e409cbbef8.scope: Deactivated successfully.
Jan 20 18:44:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:18 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184418 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.054113926 +0000 UTC m=+0.051808349 container create e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9 (image=quay.io/ceph/grafana:10.4.0, name=agitated_wilson, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:18 compute-0 systemd[1]: Started libpod-conmon-e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9.scope.
Jan 20 18:44:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.025031342 +0000 UTC m=+0.022725785 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.127897398 +0000 UTC m=+0.125591871 container init e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9 (image=quay.io/ceph/grafana:10.4.0, name=agitated_wilson, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.133148207 +0000 UTC m=+0.130842650 container start e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9 (image=quay.io/ceph/grafana:10.4.0, name=agitated_wilson, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:18 compute-0 agitated_wilson[99255]: 472 0
Jan 20 18:44:18 compute-0 systemd[1]: libpod-e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9.scope: Deactivated successfully.
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.136673901 +0000 UTC m=+0.134368344 container attach e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9 (image=quay.io/ceph/grafana:10.4.0, name=agitated_wilson, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.136933277 +0000 UTC m=+0.134627710 container died e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9 (image=quay.io/ceph/grafana:10.4.0, name=agitated_wilson, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-49fcdbfa8f5fb7d21a40fcc1c57801dab75a26b331fe7116acc36f73533cc206-merged.mount: Deactivated successfully.
Jan 20 18:44:18 compute-0 podman[99238]: 2026-01-20 18:44:18.177420854 +0000 UTC m=+0.175115287 container remove e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9 (image=quay.io/ceph/grafana:10.4.0, name=agitated_wilson, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:18 compute-0 systemd[1]: libpod-conmon-e6d644bee927825f157c3a902685d6c52a6798815362a469859532b227ec9ca9.scope: Deactivated successfully.
Jan 20 18:44:18 compute-0 ceph-mon[74381]: 7.d deep-scrub starts
Jan 20 18:44:18 compute-0 ceph-mon[74381]: 7.d deep-scrub ok
Jan 20 18:44:18 compute-0 ceph-mon[74381]: 7.14 deep-scrub starts
Jan 20 18:44:18 compute-0 ceph-mon[74381]: 7.14 deep-scrub ok
Jan 20 18:44:18 compute-0 ceph-mon[74381]: pgmap v64: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 298 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:18 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:18 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:18 compute-0 systemd[1]: Reloading.
Jan 20 18:44:18 compute-0 systemd-rc-local-generator[99294]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:18 compute-0 systemd-sysv-generator[99298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:18 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:18 compute-0 systemd[1]: Reloading.
Jan 20 18:44:18 compute-0 systemd-rc-local-generator[99339]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:18 compute-0 systemd-sysv-generator[99344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v66: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 234 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 18:44:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 18:44:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:18 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.2 deep-scrub starts
Jan 20 18:44:18 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.2 deep-scrub ok
Jan 20 18:44:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 20 18:44:18 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 20 18:44:18 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:44:19 compute-0 podman[99395]: 2026-01-20 18:44:19.044928967 +0000 UTC m=+0.039143471 container create e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.6( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.217093468s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 238.336166382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.6( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.217053413s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.336166382s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.2( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.216214180s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 238.335403442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.2( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.216195107s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.335403442s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.e( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.216073990s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 238.335388184s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.e( v 56'46 (0'0,56'46] local-lis/les=58/60 n=1 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.216055870s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.335388184s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.a( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.216007233s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 238.335403442s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 69 pg[6.a( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=69 pruub=12.215990067s) [1] r=-1 lpr=69 pi=[58,69)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 238.335403442s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850f5843f5d719bc6e0b542275fd051de5218e69cf48a54ae5bd381575205976/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850f5843f5d719bc6e0b542275fd051de5218e69cf48a54ae5bd381575205976/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850f5843f5d719bc6e0b542275fd051de5218e69cf48a54ae5bd381575205976/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850f5843f5d719bc6e0b542275fd051de5218e69cf48a54ae5bd381575205976/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/850f5843f5d719bc6e0b542275fd051de5218e69cf48a54ae5bd381575205976/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:19 compute-0 podman[99395]: 2026-01-20 18:44:19.103099343 +0000 UTC m=+0.097313867 container init e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:19 compute-0 podman[99395]: 2026-01-20 18:44:19.109982016 +0000 UTC m=+0.104196520 container start e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:19 compute-0 bash[99395]: e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d
Jan 20 18:44:19 compute-0 podman[99395]: 2026-01-20 18:44:19.027967347 +0000 UTC m=+0.022181871 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 20 18:44:19 compute-0 systemd[1]: Started Ceph grafana.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:44:19 compute-0 sudo[98922]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:44:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:44:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 20 18:44:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 5a10f15c-962b-41ba-823a-8b1f4795524e (Updating grafana deployment (+1 -> 1))
Jan 20 18:44:19 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 5a10f15c-962b-41ba-823a-8b1f4795524e (Updating grafana deployment (+1 -> 1)) in 9 seconds
Jan 20 18:44:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Jan 20 18:44:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 20b96eeb-eb9a-44b6-8305-40113d8c2ed7 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 20 18:44:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Jan 20 18:44:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.nmmjhs on compute-0
Jan 20 18:44:19 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.nmmjhs on compute-0
Jan 20 18:44:19 compute-0 sudo[99431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:44:19 compute-0 sudo[99431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:19 compute-0 sudo[99431]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 9.14 scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 9.14 scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 10.e scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 10.e scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 7.5 scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 7.5 scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 9.11 scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 9.11 scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 7.0 deep-scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 7.0 deep-scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 12.3 scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: pgmap v65: 337 pgs: 337 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 247 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 9.0 scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 9.0 scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 10.d scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 10.d scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 8.1 deep-scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 8.1 deep-scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 7.1 scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 7.1 scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: pgmap v66: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 234 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 11.2 deep-scrub starts
Jan 20 18:44:19 compute-0 ceph-mon[74381]: 11.2 deep-scrub ok
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:19 compute-0 ceph-mon[74381]: osdmap e69: 3 total, 3 up, 3 in
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.305969237Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-20T18:44:19Z
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306253754Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306261964Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306266995Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306271175Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306275045Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306278905Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306282645Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306287105Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306291105Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306298085Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306302425Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306306536Z level=info msg=Target target=[all]
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306314616Z level=info msg="Path Home" path=/usr/share/grafana
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306318226Z level=info msg="Path Data" path=/var/lib/grafana
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.306322046Z level=info msg="Path Logs" path=/var/log/grafana
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.307305852Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.307320353Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=settings t=2026-01-20T18:44:19.307324573Z level=info msg="App mode production"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore t=2026-01-20T18:44:19.307653931Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore t=2026-01-20T18:44:19.307669532Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.308630818Z level=info msg="Starting DB migrations"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.309953753Z level=info msg="Executing migration" id="create migration_log table"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.311198976Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.244863ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.313287241Z level=info msg="Executing migration" id="create user table"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.314054632Z level=info msg="Migration successfully executed" id="create user table" duration=768.101µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.316224119Z level=info msg="Executing migration" id="add unique index user.login"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.317033371Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=809.162µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.318994103Z level=info msg="Executing migration" id="add unique index user.email"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.31962981Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=636.157µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.321733196Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.322450675Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=717.699µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.324672134Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.325319451Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=647.307µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.327794758Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.329950144Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.154987ms
Jan 20 18:44:19 compute-0 sudo[99456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.332954024Z level=info msg="Executing migration" id="create user table v2"
Jan 20 18:44:19 compute-0 sudo[99456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.334203027Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.249033ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.338393409Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.339158429Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=762.21µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.341174963Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.341865911Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=690.698µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.344342478Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.344750748Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=407.81µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.34711053Z level=info msg="Executing migration" id="Drop old table user_v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.34784953Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=739.26µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.350126281Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.351106837Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=979.896µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.354563139Z level=info msg="Executing migration" id="Update user table charset"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.35458927Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.701µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.357288732Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.358329318Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.040866ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.360717642Z level=info msg="Executing migration" id="Add missing user data"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.361047331Z level=info msg="Migration successfully executed" id="Add missing user data" duration=329.729µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.363930247Z level=info msg="Executing migration" id="Add is_disabled column to user"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.364931524Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=999.787µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.366431154Z level=info msg="Executing migration" id="Add index user.login/user.email"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.367119873Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=678.859µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.369012063Z level=info msg="Executing migration" id="Add is_service_account column to user"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.370096852Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.083679ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.373063031Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.379887252Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=6.820681ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.382222545Z level=info msg="Executing migration" id="Add uid column to user"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.383224591Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.001986ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.384871744Z level=info msg="Executing migration" id="Update uid column values for users"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.385130352Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=258.358µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.386917069Z level=info msg="Executing migration" id="Add unique index user_uid"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.38771658Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=802.961µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.390441102Z level=info msg="Executing migration" id="create temp user table v1-7"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.391261664Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=820.652µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.39371824Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.394517711Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=799.701µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.396466973Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.397247163Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=779.95µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.399769951Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.400529771Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=759.81µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.403349456Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.404015103Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=664.987µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.405868232Z level=info msg="Executing migration" id="Update temp_user table charset"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.405972846Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=104.774µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.408088672Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.40875491Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=665.738µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.410468225Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.411163203Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=695.288µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.41326355Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.413941467Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=677.557µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.417317288Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.417959474Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=641.786µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.420674576Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.423302466Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.62758ms
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.424608531Z level=info msg="Executing migration" id="create temp_user v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.425299209Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=690.278µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.427501848Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.428118434Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=616.686µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.430773955Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.431485964Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=708.879µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.433699572Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.434376161Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=676.839µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.439840136Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.440564005Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=725.249µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.444236463Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.444642873Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=405.79µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.447379486Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.448000103Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=620.837µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.451374253Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.451752502Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=377.669µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.454572377Z level=info msg="Executing migration" id="create star table"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.455180984Z level=info msg="Migration successfully executed" id="create star table" duration=608.197µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.459822517Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.460561137Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=739.16µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.465045706Z level=info msg="Executing migration" id="create org table v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.465709214Z level=info msg="Migration successfully executed" id="create org table v1" duration=660.527µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.468240391Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.468878497Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=637.776µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.472917545Z level=info msg="Executing migration" id="create org_user table v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.473675186Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=753.18µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.478433472Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.4791188Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=684.728µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.480920848Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.481555266Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=633.826µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.483402144Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.484043881Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=641.357µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.486093326Z level=info msg="Executing migration" id="Update org table charset"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.486150677Z level=info msg="Migration successfully executed" id="Update org table charset" duration=58.661µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.487940045Z level=info msg="Executing migration" id="Update org_user table charset"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.488000296Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=61.061µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.490598875Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.49076548Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=166.605µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.49301253Z level=info msg="Executing migration" id="create dashboard table"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.493663607Z level=info msg="Migration successfully executed" id="create dashboard table" duration=651.037µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.495907087Z level=info msg="Executing migration" id="add index dashboard.account_id"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.496633636Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=726.889µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.499837391Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.500596951Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=759.54µs
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.502893883Z level=info msg="Executing migration" id="create dashboard_tag table"
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:19.503486758Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=592.486µs
Jan 20 18:44:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 20 18:44:19 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 20 18:44:19 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 20 18:44:19 compute-0 podman[99525]: 2026-01-20 18:44:19.707924953 +0000 UTC m=+0.021067731 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 18:44:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:19 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:20 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:20.083676223Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:20.084872234Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.199341ms
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:20 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:44:20.613Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002376359s
Jan 20 18:44:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v68: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 239 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 18:44:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 18:44:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:20 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:20.96166429Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Jan 20 18:44:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:20.963126489Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.469189ms
Jan 20 18:44:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 20 18:44:21 compute-0 podman[99525]: 2026-01-20 18:44:21.385710177 +0000 UTC m=+1.698852955 container create 93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46 (image=quay.io/ceph/haproxy:2.3, name=pensive_black)
Jan 20 18:44:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:21.38616131Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Jan 20 18:44:21 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 20 18:44:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:21.39446316Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.29735ms
Jan 20 18:44:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 20 18:44:21 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 20 18:44:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:21 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:22 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:22 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.683319068Z level=info msg="Executing migration" id="create dashboard v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.684757186Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.441428ms
Jan 20 18:44:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:44:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.691163577Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.692273106Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.111459ms
Jan 20 18:44:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 20 18:44:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 18:44:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 18:44:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.69620589Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Jan 20 18:44:22 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.69733563Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.13105ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.710767117Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.711431284Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=668.097µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.714071905Z level=info msg="Executing migration" id="drop table dashboard_v1"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.715483242Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.410878ms
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 12.3 scrub ok
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 10.12 scrub starts
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 10.12 scrub ok
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 10.c scrub starts
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 10.c scrub ok
Jan 20 18:44:22 compute-0 ceph-mon[74381]: Deploying daemon haproxy.rgw.default.compute-0.nmmjhs on compute-0
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 12.4 scrub starts
Jan 20 18:44:22 compute-0 ceph-mon[74381]: 12.4 scrub ok
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.718839272Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.718954325Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=117.693µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.722101568Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.72408798Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.989682ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.726756461Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.728637522Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.876691ms
Jan 20 18:44:22 compute-0 systemd[1]: Started libpod-conmon-93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46.scope.
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.730945013Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.732623037Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.679045ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.735274157Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.7361298Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=857.093µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.738242226Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.740110256Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.86908ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.741818161Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.742748806Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=945.445µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.744918623Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.746099525Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.184392ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.748633492Z level=info msg="Executing migration" id="Update dashboard table charset"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.748674204Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=45.522µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.750906273Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.750934054Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=29.881µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.754655712Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Jan 20 18:44:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:22 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 20 18:44:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.75683782Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.183708ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.758877664Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.760492187Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.613313ms
Jan 20 18:44:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.762935412Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.765274444Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.336352ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.76738402Z level=info msg="Executing migration" id="Add column uid in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.76963425Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.24934ms
Jan 20 18:44:22 compute-0 podman[99525]: 2026-01-20 18:44:22.772406334 +0000 UTC m=+3.085549112 container init 93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46 (image=quay.io/ceph/haproxy:2.3, name=pensive_black)
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.772006273Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.77224668Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=241.776µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.774080608Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.775032104Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=948.025µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.777160279Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.778222358Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.061669ms
Jan 20 18:44:22 compute-0 podman[99525]: 2026-01-20 18:44:22.779398429 +0000 UTC m=+3.092541177 container start 93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46 (image=quay.io/ceph/haproxy:2.3, name=pensive_black)
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.782782899Z level=info msg="Executing migration" id="Update dashboard title length"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.782895562Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=117.503µs
Jan 20 18:44:22 compute-0 podman[99525]: 2026-01-20 18:44:22.783121809 +0000 UTC m=+3.096264567 container attach 93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46 (image=quay.io/ceph/haproxy:2.3, name=pensive_black)
Jan 20 18:44:22 compute-0 pensive_black[99542]: 0 0
Jan 20 18:44:22 compute-0 systemd[1]: libpod-93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46.scope: Deactivated successfully.
Jan 20 18:44:22 compute-0 podman[99525]: 2026-01-20 18:44:22.785261745 +0000 UTC m=+3.098404503 container died 93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46 (image=quay.io/ceph/haproxy:2.3, name=pensive_black)
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.786887338Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.788152351Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.268163ms
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.529809952s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.382369995s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.529767036s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.382369995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.529543877s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.382369995s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.529509544s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.382369995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.528578758s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.382064819s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.528557777s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.382064819s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.528330803s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.382064819s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.527726173s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.381607056s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.528007507s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.381881714s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.527700424s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.381607056s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.527991295s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.381881714s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.528280258s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.382064819s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.527505875s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.381591797s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.527460098s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.381591797s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.524264336s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 241.378646851s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 71 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=71 pruub=11.524248123s) [2] r=-1 lpr=71 pi=[62,71)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 241.378646851s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.791755318Z level=info msg="Executing migration" id="create dashboard_provisioning"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.793243117Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.49065ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.796441262Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.800874779Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=4.431518ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.802715229Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.803368116Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=653.036µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.805466771Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.806107548Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=638.577µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.808298917Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.808962534Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=663.257µs
Jan 20 18:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8317eaa456466ad866738ab04769546677b1ba359db7291647cf19b8c7755fd9-merged.mount: Deactivated successfully.
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.814331617Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.815072986Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=746.029µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.818357734Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.819098943Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=741.599µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.821547708Z level=info msg="Executing migration" id="Add check_sum column"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.823369876Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.822028ms
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.825409551Z level=info msg="Executing migration" id="Add index for dashboard_title"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.826085618Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=676.507µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.829125919Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.829416498Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=294.218µs
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.831634006Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.83179231Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=156.294µs
Jan 20 18:44:22 compute-0 podman[99525]: 2026-01-20 18:44:22.832183201 +0000 UTC m=+3.145325959 container remove 93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46 (image=quay.io/ceph/haproxy:2.3, name=pensive_black)
Jan 20 18:44:22 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 20 18:44:22 compute-0 systemd[1]: libpod-conmon-93db59528e68efd6f25fb32218611bfa098327de9351a8f38df692407868bc46.scope: Deactivated successfully.
Jan 20 18:44:22 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.878379778Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Jan 20 18:44:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:22.879276471Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=895.044µs
Jan 20 18:44:22 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 26 completed events
Jan 20 18:44:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.156546942Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.159761538Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.215146ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.162782458Z level=info msg="Executing migration" id="create data_source table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.164006961Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.224743ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.167298408Z level=info msg="Executing migration" id="add index data_source.account_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.168229113Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=930.895µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.170498303Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.171559131Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.060248ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.175995929Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.177280583Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.287864ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.185669936Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.187038932Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.375256ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.200371887Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.207140045Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.766119ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.20955081Z level=info msg="Executing migration" id="create data_source table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.210546776Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=996.236µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.213104184Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.21443664Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.331996ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.216690009Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.21745397Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=763.991µs
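
[Editor's note] The grafana-compute-0 lines are logfmt records: fixed keys (logger, t, level), a msg, and for the migrator an id plus, on the "successfully executed" lines, a duration. A minimal Go sketch for splitting such a line into fields; the regex-based parser is a convenience for reading this journal, not anything Grafana ships:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// kvRe matches logfmt key=value pairs: the value is either a
// double-quoted string (escapes allowed) or a bare token.
var kvRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

// parseLogfmt turns one migrator line into a field map.
func parseLogfmt(line string) map[string]string {
	fields := make(map[string]string)
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		v := m[2]
		if strings.HasPrefix(v, `"`) {
			if uq, err := strconv.Unquote(v); err == nil {
				v = uq // strip quotes and resolve escapes
			}
		}
		fields[m[1]] = v
	}
	return fields
}

func main() {
	// Sample taken verbatim from the journal above.
	line := `logger=migrator t=2026-01-20T18:44:23.168229113Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=930.895µs`
	f := parseLogfmt(line)
	// Duration values such as 930.895µs parse with time.ParseDuration.
	fmt.Printf("%s: %s took %s\n", f["msg"], f["id"], f["duration"])
}
```
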
Jan 20 18:44:23 compute-0 systemd[1]: Reloading.
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.220243054Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.221168358Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=925.334µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.223276324Z level=info msg="Executing migration" id="Add column with_credentials"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.227289881Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=4.015497ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.229416217Z level=info msg="Executing migration" id="Add secure json data column"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.233637439Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=4.221392ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.236018792Z level=info msg="Executing migration" id="Update data_source table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.236059974Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=42.151µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.238488398Z level=info msg="Executing migration" id="Update initial version to 1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.238861028Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=368.69µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.241136169Z level=info msg="Executing migration" id="Add read_only data column"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.245355141Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=4.218192ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.247247681Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.247566799Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=319.619µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.250091746Z level=info msg="Executing migration" id="Update json_data with nulls"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.250313952Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=224.416µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.252224033Z level=info msg="Executing migration" id="Add uid column"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.254156634Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.932651ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.256660611Z level=info msg="Executing migration" id="Update uid value"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.256863996Z level=info msg="Migration successfully executed" id="Update uid value" duration=203.155µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.258843278Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.259557178Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=707.519µs
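
[Editor's note] The data_source uid steps directly above follow the standard three-phase way to introduce a unique column on a live table: add it nullable, backfill every row, then create the unique index. A sketch of that shape against SQLite; the DDL, the go-sqlite3 driver, and the 'ds-' || id backfill are illustrative assumptions, not Grafana's actual schema or code:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver; Grafana runs these via its own migrator
)

// addUIDColumn mirrors the three log steps: "Add uid column",
// "Update uid value", "Add unique index datasource_org_id_uid".
func addUIDColumn(db *sql.DB) error {
	steps := []string{
		// New column starts nullable so existing rows remain valid.
		`ALTER TABLE data_source ADD COLUMN uid TEXT`,
		// Hypothetical backfill; the real migration derives uids differently.
		`UPDATE data_source SET uid = 'ds-' || id WHERE uid IS NULL`,
		// Uniqueness is only enforceable once every row has a value.
		`CREATE UNIQUE INDEX UQE_data_source_org_id_uid ON data_source (org_id, uid)`,
	}
	for _, s := range steps {
		if _, err := db.Exec(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// Toy stand-in for the pre-migration table.
	if _, err := db.Exec(`CREATE TABLE data_source (id INTEGER PRIMARY KEY, org_id INTEGER, name TEXT)`); err != nil {
		log.Fatal(err)
	}
	if err := addUIDColumn(db); err != nil {
		log.Fatal(err)
	}
	log.Println("uid column added, backfilled, and indexed")
}
```
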
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.261523679Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.262241119Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=717.49µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.265569577Z level=info msg="Executing migration" id="create api_key table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.266332908Z level=info msg="Migration successfully executed" id="create api_key table" duration=763.821µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.268864784Z level=info msg="Executing migration" id="add index api_key.account_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.269892852Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.029388ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.272183783Z level=info msg="Executing migration" id="add index api_key.key"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.272765638Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=581.945µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.275004468Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.27588134Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=876.492µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.277974676Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.278718686Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=743.96µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.280391801Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.281168951Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=776.92µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.282946899Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.283606576Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=659.297µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.285219669Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.291532787Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.307758ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.297925416Z level=info msg="Executing migration" id="create api_key table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.299028585Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.11628ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.301397529Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.302112457Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=715.808µs
Jan 20 18:44:23 compute-0 systemd-rc-local-generator[99588]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.307015988Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.307769527Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=756.219µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.309593596Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.310263354Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=669.248µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.312437702Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.31276421Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=330.228µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.314375093Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.315066171Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=688.108µs
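
[Editor's note] The api_key sequence above spells out the migrator's full table-rebuild recipe for changes that a plain ALTER TABLE can't express: drop the v1 indexes, rename the table aside, create the v2 schema, rebuild the indexes, copy the rows across, drop the v1 table. A sketch of that recipe; the column list and the account_id -> org_id mapping are inferred from the index names in the log, not Grafana's real schema, and wrapping everything in one transaction is this sketch's choice (the log only fixes the step order):

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // assumed driver for the sketch
)

// rebuildAPIKeyTable replays the v1 -> v2 steps from the journal.
func rebuildAPIKeyTable(db *sql.DB) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once Commit succeeds

	steps := []string{
		// "drop index ... - v1": old indexes go first so v2 can reuse names.
		`DROP INDEX IF EXISTS IDX_api_key_account_id`,
		`DROP INDEX IF EXISTS UQE_api_key_key`,
		`DROP INDEX IF EXISTS UQE_api_key_account_id_name`,
		// "Rename table api_key to api_key_v1 - v1"
		`ALTER TABLE api_key RENAME TO api_key_v1`,
		// "create api_key table v2" (columns are illustrative)
		`CREATE TABLE api_key (
			id     INTEGER PRIMARY KEY,
			org_id INTEGER NOT NULL,
			name   TEXT    NOT NULL,
			key    TEXT    NOT NULL
		)`,
		// "create index ... - v2": indexes are rebuilt against the new table.
		`CREATE INDEX IDX_api_key_org_id ON api_key (org_id)`,
		`CREATE UNIQUE INDEX UQE_api_key_key ON api_key (key)`,
		`CREATE UNIQUE INDEX UQE_api_key_org_id_name ON api_key (org_id, name)`,
		// "copy api_key v1 to v2": account_id becomes org_id, per the index renames.
		`INSERT INTO api_key (id, org_id, name, key)
		        SELECT id, account_id, name, key FROM api_key_v1`,
		// "Drop old table api_key_v1"
		`DROP TABLE api_key_v1`,
	}
	for _, s := range steps {
		if _, err := tx.Exec(s); err != nil {
			return err // api_key_v1 survives if any step fails
		}
	}
	return tx.Commit()
}

func main() {
	db, err := sql.Open("sqlite3", ":memory:")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// Toy v1 table, still keyed on account_id.
	if _, err := db.Exec(`CREATE TABLE api_key (id INTEGER PRIMARY KEY, account_id INTEGER, name TEXT, key TEXT)`); err != nil {
		log.Fatal(err)
	}
	if err := rebuildAPIKeyTable(db); err != nil {
		log.Fatal(err)
	}
	log.Println("api_key rebuilt as v2")
}
```
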
Jan 20 18:44:23 compute-0 systemd-sysv-generator[99592]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.323694801Z level=info msg="Executing migration" id="Update api_key table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.323748382Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=58.492µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.326313779Z level=info msg="Executing migration" id="Add expires to api_key table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.328460377Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.146758ms
Jan 20 18:44:23 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.330116371Z level=info msg="Executing migration" id="Add service account foreign key"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.332039352Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.922931ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.333682016Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.333822759Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=144.583µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.335561565Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.337488796Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.927381ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.339014347Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.340901417Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.8847ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.342394557Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.343062684Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=668.057µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.344904534Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.345460458Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=555.534µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.346983178Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.347640156Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=659.328µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.349397323Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.350135262Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=737.489µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.352423933Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.353153193Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=725.64µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.355561416Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.356274445Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=713.609µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.358191356Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.358240367Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=47.001µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.360088667Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.360110907Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=20.81µs
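
[Editor's note] A detail worth noticing in the durations: every "Update ... table charset" step finishes in tens of microseconds, while genuine DDL such as the "Add column" steps takes milliseconds. The plausible reading is that charset conversion only means anything on MySQL, so on other backends (cephadm's Grafana typically runs on SQLite) the step degenerates to a no-op. A dialect-gated migration in that spirit; the function, the dialect check, and the DDL are illustrative, not Grafana's implementation:

```go
package main

import (
	"database/sql"
	"fmt"
)

// updateTableCharset does real work only on MySQL and returns
// immediately elsewhere -- consistent with the microsecond
// durations the charset steps show in this journal.
func updateTableCharset(db *sql.DB, dialect, table string) error {
	if dialect != "mysql" {
		return nil // sqlite/postgres: nothing to convert
	}
	// Hypothetical DDL; charset and collation choices vary by deployment.
	_, err := db.Exec(fmt.Sprintf(
		"ALTER TABLE %s CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci", table))
	return err
}

func main() {
	// Safe with a nil handle only because the sqlite3 path returns early.
	if err := updateTableCharset(nil, "sqlite3", "dashboard_snapshot"); err != nil {
		fmt.Println(err)
	}
	fmt.Println("charset step skipped on sqlite3")
}
```
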
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.362188022Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.364103693Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=1.915831ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.365944732Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.367999566Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.054574ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.370442411Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.370488513Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=46.792µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.372079925Z level=info msg="Executing migration" id="create quota table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.37262221Z level=info msg="Migration successfully executed" id="create quota table v1" duration=542.355µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.375341461Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.376534343Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.201722ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.379229114Z level=info msg="Executing migration" id="Update quota table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.379256395Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=26.391µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.381005862Z level=info msg="Executing migration" id="create plugin_setting table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.381740451Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=733.879µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.383626621Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.38432026Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=693.619µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.386477827Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.388779008Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.300761ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.391571622Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.391637864Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=73.082µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.393886654Z level=info msg="Executing migration" id="create session table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.394848729Z level=info msg="Migration successfully executed" id="create session table" duration=912.894µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.39787055Z level=info msg="Executing migration" id="Drop old table playlist table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.397956372Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=86.742µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.399572465Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.399645347Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=71.162µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.401707171Z level=info msg="Executing migration" id="create playlist table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.402432381Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=724.99µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.404929757Z level=info msg="Executing migration" id="create playlist item table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.405528793Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=607.986µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.407691831Z level=info msg="Executing migration" id="Update playlist table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.407720252Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=32.18µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.409657003Z level=info msg="Executing migration" id="Update playlist_item table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.409680103Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.011µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.411381878Z level=info msg="Executing migration" id="Add playlist column created_at"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.414127291Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.748283ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.415715263Z level=info msg="Executing migration" id="Add playlist column updated_at"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.418329823Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.61344ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.420030137Z level=info msg="Executing migration" id="drop preferences table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.42010934Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=78.492µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.421620801Z level=info msg="Executing migration" id="drop preferences table v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.421692322Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=71.691µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.423323036Z level=info msg="Executing migration" id="create preferences table v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.424020914Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=697.908µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.426328065Z level=info msg="Executing migration" id="Update preferences table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.426351336Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=23.681µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.428184024Z level=info msg="Executing migration" id="Add column team_id in preferences"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.430542477Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.358533ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.432097068Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.432227462Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=130.584µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.433855995Z level=info msg="Executing migration" id="Add column week_start in preferences"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.436480215Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=2.628099ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.438491778Z level=info msg="Executing migration" id="Add column preferences.json_data"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.440879601Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.387473ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.442377691Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.442450333Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=69.562µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.445342149Z level=info msg="Executing migration" id="Add preferences index org_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.446226103Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=883.634µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.448273408Z level=info msg="Executing migration" id="Add preferences index user_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.44910639Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=832.602µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.451094303Z level=info msg="Executing migration" id="create alert table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.452060038Z level=info msg="Migration successfully executed" id="create alert table v1" duration=968.525µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.45437699Z level=info msg="Executing migration" id="add index alert org_id & id "
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.455527321Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.151601ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.45778258Z level=info msg="Executing migration" id="add index alert state"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.458653323Z level=info msg="Migration successfully executed" id="add index alert state" duration=870.463µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.46076513Z level=info msg="Executing migration" id="add index alert dashboard_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.461678684Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=912.634µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.464017936Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.464738465Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=719.758µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.466941924Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.467700843Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=759.84µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.469684516Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.470373504Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=688.638µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.471921855Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.479049225Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.1262ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.480532365Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.48112897Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=596.155µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.482701192Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.483339059Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=637.727µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.485845175Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.486114893Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=269.668µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.487489359Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.487948331Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=459.142µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.48942853Z level=info msg="Executing migration" id="create alert_notification table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.490008615Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=579.405µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.491615629Z level=info msg="Executing migration" id="Add column is_default"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.494070723Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.454764ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.495400569Z level=info msg="Executing migration" id="Add column frequency"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.498129621Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.729352ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.49957283Z level=info msg="Executing migration" id="Add column send_reminder"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.502876778Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.210915ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.508898597Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.511439355Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.543518ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.512812012Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.513456068Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=658.117µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.516407786Z level=info msg="Executing migration" id="Update alert table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.516429447Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=22.441µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.517894396Z level=info msg="Executing migration" id="Update alert_notification table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.517913376Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=19.4µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.519349244Z level=info msg="Executing migration" id="create notification_journal table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.51992286Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=574.706µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.521840811Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.522469658Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=628.637µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.524558923Z level=info msg="Executing migration" id="drop alert_notification_journal"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.525314774Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=755.891µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.530163032Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.530763007Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=599.935µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.532638438Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.533535241Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=895.973µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.535222777Z level=info msg="Executing migration" id="Add for to alert table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.537874417Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.65109ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.539117519Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.541628156Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=2.509737ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.543089135Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.543222028Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=133.183µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.545742875Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.546407844Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=664.658µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.548450377Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.549165667Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=714.48µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.550968004Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.553516242Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.547218ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.555179616Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.555231808Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=52.742µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.556784209Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.557613501Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=829.332µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.559184753Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Jan 20 18:44:23 compute-0 systemd[1]: Reloading.
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.560067545Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=882.422µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.562268914Z level=info msg="Executing migration" id="Drop old annotation table v4"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.562353097Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=84.062µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.56399789Z level=info msg="Executing migration" id="create annotation table v5"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.564693469Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=695.169µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.567500173Z level=info msg="Executing migration" id="add index annotation 0 v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.568208402Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=710.609µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.570313118Z level=info msg="Executing migration" id="add index annotation 1 v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.571235472Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=922.534µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.573674977Z level=info msg="Executing migration" id="add index annotation 2 v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.574330245Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=654.758µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.576331598Z level=info msg="Executing migration" id="add index annotation 3 v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.577116049Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=784.671µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.579086861Z level=info msg="Executing migration" id="add index annotation 4 v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.579838761Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=751.85µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.582151262Z level=info msg="Executing migration" id="Update annotation table charset"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.582176563Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=26.041µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.583931619Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.587774541Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=3.838932ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.590410801Z level=info msg="Executing migration" id="Drop category_id index"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.591256544Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=842.263µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.592917408Z level=info msg="Executing migration" id="Add column tags to annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.596044791Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.127113ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.598055345Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.598762843Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=708.188µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.600688104Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.601552867Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=864.253µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.604430364Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.605233565Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=802.541µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.607084394Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.615834636Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=8.746172ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.621033835Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.621914098Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=881.884µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.623967752Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.624653611Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=686.549µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.627066824Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Jan 20 18:44:23 compute-0 systemd-sysv-generator[99634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Jan 20 18:44:23 compute-0 systemd-rc-local-generator[99630]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.635256792Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=8.180928ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.637385449Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.638144509Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=759.56µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.639956266Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.640124462Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=168.936µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.641992861Z level=info msg="Executing migration" id="Add created time to annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.645140204Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=3.146553ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.647247691Z level=info msg="Executing migration" id="Add updated time to annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.650469166Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.221205ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.655028387Z level=info msg="Executing migration" id="Add index for created in annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.655742226Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=713.939µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.657570124Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.658266133Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=696.119µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.662133235Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.662345491Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=212.406µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.665562897Z level=info msg="Executing migration" id="Add epoch_end column"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.668544636Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=2.981419ms
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.670576819Z level=info msg="Executing migration" id="Add index for epoch_end"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.671286099Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=708.76µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.673909689Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.674052622Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=142.773µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.676180988Z level=info msg="Executing migration" id="Move region to single row"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.676472377Z level=info msg="Migration successfully executed" id="Move region to single row" duration=293.779µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.678405598Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.679125536Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=719.618µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.681049688Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.681751326Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=700.348µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.683307347Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.684047398Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=739.831µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.685929677Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.686609296Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=678.779µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.689209035Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.689909073Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=699.988µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.691996609Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.692715477Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=724.378µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.694533215Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.694598718Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=66.003µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.696975581Z level=info msg="Executing migration" id="create test_data table"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.697598728Z level=info msg="Migration successfully executed" id="create test_data table" duration=625.108µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.700136904Z level=info msg="Executing migration" id="create dashboard_version table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.700778082Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=640.878µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.706163875Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.707072758Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=910.823µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.809030466Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.810929937Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.895521ms
Jan 20 18:44:23 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 20 18:44:23 compute-0 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.nmmjhs for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:44:23 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 20 18:44:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:23 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.88977248Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.890084528Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=317.679µs
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.919425207Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Jan 20 18:44:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:23.919918879Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=498.423µs
Jan 20 18:44:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:24 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:24 compute-0 podman[99690]: 2026-01-20 18:44:24.005281066 +0000 UTC m=+0.027913982 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Jan 20 18:44:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:24.563571058Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Jan 20 18:44:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:24.563740622Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=166.544µs
Jan 20 18:44:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:24 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 8 unknown, 2 active+clean+scrubbing, 327 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:44:24 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 20 18:44:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:25 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1940016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:26 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:26 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 8 unknown, 2 active+clean+scrubbing, 327 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:44:27 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 20 18:44:27 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 20 18:44:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:27.746452822Z level=info msg="Executing migration" id="create team table"
Jan 20 18:44:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:27.748157698Z level=info msg="Migration successfully executed" id="create team table" duration=1.708676ms
Jan 20 18:44:27 compute-0 podman[99690]: 2026-01-20 18:44:27.749840103 +0000 UTC m=+3.772472999 container create 85cc2ee544d221f522985c4e0ea926a853a6e7fcb6cc4b41c0ac08dda460cf54 (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-rgw-default-compute-0-nmmjhs)
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 11.0 scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 11.0 scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 7.7 scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 7.7 scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.4 deep-scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.4 deep-scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: pgmap v68: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 239 B/s, 1 keys/s, 2 objects/s recovering
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 11.d scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.a scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.a scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 11.d scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 12.2 scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 12.2 scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: osdmap e70: 3 total, 3 up, 3 in
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.b deep-scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 8.e scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.b deep-scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.f scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 10.f scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: pgmap v70: 337 pgs: 2 active+clean+scrubbing, 335 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 8.e scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:27 compute-0 ceph-mon[74381]: osdmap e71: 3 total, 3 up, 3 in
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 12.d scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 11.c scrub starts
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 11.c scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: 12.d scrub ok
Jan 20 18:44:27 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:27 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 20 18:44:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:27.781931245Z level=info msg="Executing migration" id="add index team.org_id"
Jan 20 18:44:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:27.78403111Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=2.102675ms
Jan 20 18:44:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:27 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:28 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.040753547Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.042482683Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.731786ms
Jan 20 18:44:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aff691ed5500a8c14729d7b5595b90064c1ef09ec3102cf972e32a01e7697561/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.176449Z level=info msg="Executing migration" id="Add column uid in team"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.180202109Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=3.7536ms
Jan 20 18:44:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.329906943Z level=info msg="Executing migration" id="Update uid column values in team"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.330228831Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=327.588µs
Jan 20 18:44:28 compute-0 ceph-mgr[74676]: [progress WARNING root] Starting Global Recovery Event, 8 pgs not in active + clean state
Jan 20 18:44:28 compute-0 podman[99690]: 2026-01-20 18:44:28.399107071 +0000 UTC m=+4.421739987 container init 85cc2ee544d221f522985c4e0ea926a853a6e7fcb6cc4b41c0ac08dda460cf54 (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-rgw-default-compute-0-nmmjhs)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.400151458Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.401279128Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.13097ms
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 72 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:28 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 20 18:44:28 compute-0 podman[99690]: 2026-01-20 18:44:28.404492833 +0000 UTC m=+4.427125729 container start 85cc2ee544d221f522985c4e0ea926a853a6e7fcb6cc4b41c0ac08dda460cf54 (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-rgw-default-compute-0-nmmjhs)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-rgw-default-compute-0-nmmjhs[99707]: [NOTICE] 019/184428 (2) : New worker #1 (4) forked
Jan 20 18:44:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.493548748Z level=info msg="Executing migration" id="create team member table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.494556305Z level=info msg="Migration successfully executed" id="create team member table" duration=1.011067ms
Jan 20 18:44:28 compute-0 bash[99690]: 85cc2ee544d221f522985c4e0ea926a853a6e7fcb6cc4b41c0ac08dda460cf54
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.500389139Z level=info msg="Executing migration" id="add index team_member.org_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.501294084Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=904.775µs
Jan 20 18:44:28 compute-0 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.nmmjhs for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.506678806Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.508103714Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.428548ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.512344176Z level=info msg="Executing migration" id="add index team_member.team_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.513404635Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.066159ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.517011111Z level=info msg="Executing migration" id="Add column email to team table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.521280714Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.265263ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.523376939Z level=info msg="Executing migration" id="Add column external to team_member table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.527874009Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.512139ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.530513289Z level=info msg="Executing migration" id="Add column permission to team_member table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.534664629Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.14912ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.537216547Z level=info msg="Executing migration" id="create dashboard acl table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.538225004Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.009907ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.541602093Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.54259474Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=993.967µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.545118057Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.546177245Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.060148ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.549521834Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.550686465Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.166411ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.554077235Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.554985739Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=912.535µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.558977775Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.55995759Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=984.305µs
Jan 20 18:44:28 compute-0 sudo[99456]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.565299683Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.566555876Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.259624ms
Jan 20 18:44:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.569453133Z level=info msg="Executing migration" id="add index dashboard_permission"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.570687536Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.237043ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.574299492Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.575655098Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.357416ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.577780364Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.578103822Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=322.918µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.58063259Z level=info msg="Executing migration" id="create tag table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.582984672Z level=info msg="Migration successfully executed" id="create tag table" duration=2.351912ms
Jan 20 18:44:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.587630095Z level=info msg="Executing migration" id="add index tag.key_value"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.588847828Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.227753ms
Jan 20 18:44:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:28 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.593337037Z level=info msg="Executing migration" id="create login attempt table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.594335364Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.002077ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.59721622Z level=info msg="Executing migration" id="add index login_attempt.username"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.598145285Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=929.855µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.604150604Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.605173042Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.025688ms
Jan 20 18:44:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.608851349Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Jan 20 18:44:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 20 18:44:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.620095267Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=11.235398ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.622394089Z level=info msg="Executing migration" id="create login_attempt v2"
Jan 20 18:44:28 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.foujyq on compute-2
Jan 20 18:44:28 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.foujyq on compute-2
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.623243141Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=850.433µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.625394988Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.626135508Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=740.37µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.628671665Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.628955253Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=283.068µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.631018227Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.631570493Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=549.276µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.635514117Z level=info msg="Executing migration" id="create user auth table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.636131203Z level=info msg="Migration successfully executed" id="create user auth table" duration=617.566µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.642234415Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.643063858Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=829.143µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.645634336Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.645680387Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=46.381µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.648254605Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.652006505Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.75343ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.655621441Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.659608787Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=4.021307ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.662462793Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.666562482Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=4.094448ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.668537514Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.672845748Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.307954ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.674790039Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.67555436Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=764.761µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.677943684Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.681978321Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=4.018876ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.684193389Z level=info msg="Executing migration" id="create server_lock table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.68497777Z level=info msg="Migration successfully executed" id="create server_lock table" duration=785.011µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.688574935Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.689654584Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.084639ms
Jan 20 18:44:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v75: 337 pgs: 4 peering, 8 unknown, 2 active+clean+scrubbing, 323 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.693005243Z level=info msg="Executing migration" id="create user auth token table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.693840475Z level=info msg="Migration successfully executed" id="create user auth token table" duration=835.342µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.696436834Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.697212945Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=775.811µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.699396823Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.700148033Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=751.01µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.703074331Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.704276132Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.201851ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.707545199Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.711766512Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=4.222512ms
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.714166556Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.715005467Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=839.041µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.718434129Z level=info msg="Executing migration" id="create cache_data table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.71923984Z level=info msg="Migration successfully executed" id="create cache_data table" duration=805.461µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.721549491Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.722391693Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=842.332µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.724659304Z level=info msg="Executing migration" id="create short_url table v1"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.725379273Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=720.059µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.729083561Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.729944304Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=860.783µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.733537039Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.73359362Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=57.411µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.735275196Z level=info msg="Executing migration" id="delete alert_definition table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.735371978Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=97.532µs
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.73733224Z level=info msg="Executing migration" id="recreate alert_definition table"
Jan 20 18:44:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:28.738121711Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=789.551µs
Jan 20 18:44:28 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 20 18:44:28 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.017460047Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.018559077Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.10304ms
Jan 20 18:44:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.464455244Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.466467098Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=2.015184ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.581662246Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.11 scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.11 scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 11.b scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 11.b scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.0 scrub starts
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.582090408Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=445.041µs
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.0 scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.1e deep-scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.1e deep-scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: pgmap v72: 337 pgs: 8 unknown, 2 active+clean+scrubbing, 327 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 11.9 scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 10.6 scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 10.6 scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.1d scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.1d scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.1f scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.1f scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: pgmap v73: 337 pgs: 8 unknown, 2 active+clean+scrubbing, 327 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.9 scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.9 scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 10.1a scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 10.1a scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.17 deep-scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 12.17 deep-scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 8.0 scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 11.9 scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 8.0 scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 10.1c deep-scrub starts
Jan 20 18:44:29 compute-0 ceph-mon[74381]: 10.1c deep-scrub ok
Jan 20 18:44:29 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:29 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 18:44:29 compute-0 ceph-mon[74381]: osdmap e72: 3 total, 3 up, 3 in
Jan 20 18:44:29 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:29 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:29 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:29 compute-0 ceph-mon[74381]: Deploying daemon haproxy.rgw.default.compute-2.foujyq on compute-2
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.58746683Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.589665209Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=2.200909ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.591999022Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.593629795Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.631824ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.597375403Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.598972206Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.598523ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.602787097Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.604092682Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.306235ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.607681898Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.61268151Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.007993ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.615519295Z level=info msg="Executing migration" id="drop alert_definition table"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.616720177Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.200832ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.620025936Z level=info msg="Executing migration" id="delete alert_definition_version table"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.62020691Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=182.044µs
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.624132494Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.625278404Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.15054ms
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.627071343Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:29.628008187Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=936.504µs
Jan 20 18:44:29 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 20 18:44:29 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 20 18:44:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:29 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:30 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.084303632Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.085726859Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.431058ms
Jan 20 18:44:30 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.100174703Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.100383489Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=214.017µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.102972497Z level=info msg="Executing migration" id="drop alert_definition_version table"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.106163242Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=3.036601ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.116063405Z level=info msg="Executing migration" id="create alert_instance table"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.117196235Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.13635ms
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.191100587Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.192449862Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.351955ms
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 73 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=72) [2]/[0] async=[2] r=0 lpr=72 pi=[62,72)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=1.762045622s ======
Jan 20 18:44:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:28.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=1.762045622s
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.198513574Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.19953025Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.019656ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.250270188Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.256622426Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.356108ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.258678681Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.259632026Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=948.465µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.261865116Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.263103859Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.242803ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.265112902Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.288224906Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=23.106104ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.339618979Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.361965503Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=22.344344ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.370142Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.370963492Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=821.872µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.373536241Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.37426979Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=736.299µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.376757686Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.380777613Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.022027ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.383012212Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.387014498Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=4.002036ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.389168186Z level=info msg="Executing migration" id="create alert_rule table"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.390002397Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=834.251µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.394749984Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.395656597Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=906.933µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.399772217Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.400967179Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.195582ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.55618594Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.559029265Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=2.846055ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.590031908Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.590160421Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=133.643µs
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:30 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.634439388Z level=info msg="Executing migration" id="add column for to alert_rule"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.639223504Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.786857ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.644267658Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.654788008Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=10.494279ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.657721776Z level=info msg="Executing migration" id="add column labels to alert_rule"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.664599269Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.870712ms
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.666874228Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.668178473Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.304645ms
Jan 20 18:44:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v77: 337 pgs: 4 peering, 8 unknown, 2 active+clean+scrubbing, 323 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.933403036Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.934967377Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.567092ms
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 10.10 scrub starts
Jan 20 18:44:30 compute-0 ceph-mon[74381]: pgmap v75: 337 pgs: 4 peering, 8 unknown, 2 active+clean+scrubbing, 323 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 10.10 scrub ok
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 8.7 scrub starts
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 8.7 scrub ok
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 12.1b scrub starts
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 12.1b scrub ok
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 10.1d scrub starts
Jan 20 18:44:30 compute-0 ceph-mon[74381]: 10.1d scrub ok
Jan 20 18:44:30 compute-0 ceph-mon[74381]: osdmap e73: 3 total, 3 up, 3 in
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.945478166Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Jan 20 18:44:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:30.950109508Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.633383ms
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.076149175Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Jan 20 18:44:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.085257167Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=9.102253ms
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.089464909Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.091120032Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.657574ms
Jan 20 18:44:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.095736115Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Jan 20 18:44:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.103327967Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.591223ms
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.107766634Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.113092206Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.323522ms
Jan 20 18:44:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.746603735Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.746958155Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=358.2µs
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.755479722Z level=info msg="Executing migration" id="create alert_rule_version table"
Jan 20 18:44:31 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.758949853Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=3.471082ms
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.794907318Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396240234s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.253921509s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.336577415s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.194259644s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396312714s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.253936768s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396112442s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.253936768s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.7( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.336379051s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.194259644s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396564484s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.254287720s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.3( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396076202s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.253921509s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396032333s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.253936768s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.17( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396185875s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.254287720s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.395662308s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.253921509s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.13( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.396083832s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.253936768s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.395541191s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.254196167s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.b( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.395462990s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.254196167s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=5 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.395633698s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.253921509s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.394763947s) [2] async=[2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 253.254028320s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 74 pg[9.f( v 49'1085 (0'0,49'1085] local-lis/les=72/73 n=6 ec=62/41 lis/c=72/62 les/c/f=73/63/0 sis=74 pruub=14.394648552s) [2] r=-1 lpr=74 pi=[62,74)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 253.254028320s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.798994827Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=4.087099ms
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.804541124Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:31.805794638Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.253444ms
Jan 20 18:44:31 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 20 18:44:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:31 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:32 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.055946258Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.056072681Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=132.063µs
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.059170534Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.065485532Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.312247ms
Jan 20 18:44:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:32.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:32 compute-0 sshd-session[99723]: Accepted publickey for zuul from 192.168.122.30 port 41518 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:44:32 compute-0 systemd-logind[796]: New session 38 of user zuul.
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.261557867Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Jan 20 18:44:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:32 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.272645872Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=11.082654ms
Jan 20 18:44:32 compute-0 sshd-session[99723]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.318093369Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Jan 20 18:44:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.323915633Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.811094ms
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:32 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 4 peering, 8 unknown, 2 active+clean+scrubbing, 323 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 20 18:44:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:32.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.917311027Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Jan 20 18:44:32 compute-0 ceph-mon[74381]: 12.13 scrub starts
Jan 20 18:44:32 compute-0 ceph-mon[74381]: 12.13 scrub ok
Jan 20 18:44:32 compute-0 ceph-mon[74381]: 11.18 scrub starts
Jan 20 18:44:32 compute-0 ceph-mon[74381]: 11.18 scrub ok
Jan 20 18:44:32 compute-0 ceph-mon[74381]: pgmap v77: 337 pgs: 4 peering, 8 unknown, 2 active+clean+scrubbing, 323 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:32 compute-0 ceph-mon[74381]: 12.16 scrub starts
Jan 20 18:44:32 compute-0 ceph-mon[74381]: 12.16 scrub ok
Jan 20 18:44:32 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:32 compute-0 ceph-mon[74381]: osdmap e74: 3 total, 3 up, 3 in
Jan 20 18:44:32 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.924274062Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.963515ms
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.928018621Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.933858866Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=5.834845ms
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.936945599Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.937054102Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=112.323µs
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.939582238Z level=info msg="Executing migration" id=create_alert_configuration_table
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.940531544Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=949.056µs
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.94341086Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.948370142Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.958332ms
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.950858828Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.950938711Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=80.483µs
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.953665922Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Jan 20 18:44:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 20 18:44:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:32.958653915Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.985803ms
Jan 20 18:44:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:32 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.jxtvmz on compute-0
Jan 20 18:44:32 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.jxtvmz on compute-0
Jan 20 18:44:33 compute-0 sudo[99827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:44:33 compute-0 sudo[99827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:33 compute-0 sudo[99827]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.079843083Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.081296291Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.458038ms
Jan 20 18:44:33 compute-0 sudo[99883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:44:33 compute-0 sudo[99883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.131053192Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.137133423Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.077001ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.139299931Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.140186795Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=888.554µs
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.144763116Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.146170024Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.410337ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.153558859Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.159944599Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.36489ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.316766883Z level=info msg="Executing migration" id="create provenance_type table"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.318709264Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.945981ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.324073587Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.325577427Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.508149ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.348522105Z level=info msg="Executing migration" id="create alert_image table"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.349513632Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.000057ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.361043038Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.362485076Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.446608ms
Jan 20 18:44:33 compute-0 python3.9[99922]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.366890894Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.367033808Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=147.384µs
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.369690248Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.370716035Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.026837ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.477009077Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.478167688Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.163171ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.48016746Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.480522891Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.482944144Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.48352529Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=583.096µs
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.485644186Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.486860338Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.216032ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.489662033Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.496410242Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.744009ms
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.498725064Z level=info msg="Executing migration" id="create library_element table v1"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.499683769Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=957.985µs
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.543131902Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.54456924Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.439798ms
Jan 20 18:44:33 compute-0 podman[99977]: 2026-01-20 18:44:33.4827889 +0000 UTC m=+0.030301575 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:33 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:33 compute-0 podman[99977]: 2026-01-20 18:44:33.957834573 +0000 UTC m=+0.505347248 container create 3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b (image=quay.io/ceph/keepalived:2.2.4, name=sweet_nash, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, name=keepalived, com.redhat.component=keepalived-container, version=2.2.4, build-date=2023-02-22T09:23:20, vcs-type=git, architecture=x86_64, io.openshift.expose-services=, release=1793)
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.960230286Z level=info msg="Executing migration" id="create library_element_connection table v1"
Jan 20 18:44:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:33.961662354Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.442968ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.007596124Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.009155735Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.562391ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.016878341Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.017996171Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.12017ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.022775747Z level=info msg="Executing migration" id="increase max description length to 2048"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.02286548Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=92.743µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:34 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:34 compute-0 systemd[1]: Started libpod-conmon-3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b.scope.
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.048002007Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.04812916Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=130.933µs
Jan 20 18:44:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.102023302Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.102535455Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=516.613µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.113315901Z level=info msg="Executing migration" id="create data_keys table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.114541994Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.240413ms
Jan 20 18:44:34 compute-0 podman[99977]: 2026-01-20 18:44:34.115432827 +0000 UTC m=+0.662945532 container init 3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b (image=quay.io/ceph/keepalived:2.2.4, name=sweet_nash, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, version=2.2.4, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., release=1793)
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.117298777Z level=info msg="Executing migration" id="create secrets table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.118362005Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.068258ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.120503502Z level=info msg="Executing migration" id="rename data_keys name column to id"
Jan 20 18:44:34 compute-0 podman[99977]: 2026-01-20 18:44:34.122256388 +0000 UTC m=+0.669769033 container start 3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b (image=quay.io/ceph/keepalived:2.2.4, name=sweet_nash, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=keepalived, vcs-type=git, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Jan 20 18:44:34 compute-0 podman[99977]: 2026-01-20 18:44:34.125014352 +0000 UTC m=+0.672527017 container attach 3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b (image=quay.io/ceph/keepalived:2.2.4, name=sweet_nash, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, name=keepalived, build-date=2023-02-22T09:23:20)
Jan 20 18:44:34 compute-0 sweet_nash[100019]: 0 0
Jan 20 18:44:34 compute-0 systemd[1]: libpod-3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b.scope: Deactivated successfully.
Jan 20 18:44:34 compute-0 podman[99977]: 2026-01-20 18:44:34.128937776 +0000 UTC m=+0.676450421 container died 3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b (image=quay.io/ceph/keepalived:2.2.4, name=sweet_nash, release=1793, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, description=keepalived for Ceph, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.147405446Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=26.894764ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.163570556Z level=info msg="Executing migration" id="add name column into data_keys"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.169836322Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.255365ms
Jan 20 18:44:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:44:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:34.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.213877422Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.214146338Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=273.817µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.21681967Z level=info msg="Executing migration" id="rename data_keys name column to label"
Jan 20 18:44:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-9542e1d694f6379ce6ed64229855e4984b0c68ee563235454b358a70cdf565e6-merged.mount: Deactivated successfully.
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.246174519Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=29.36924ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.249101066Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Jan 20 18:44:34 compute-0 podman[99977]: 2026-01-20 18:44:34.249662241 +0000 UTC m=+0.797174886 container remove 3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b (image=quay.io/ceph/keepalived:2.2.4, name=sweet_nash, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container)
Jan 20 18:44:34 compute-0 systemd[1]: libpod-conmon-3ccf76fa02d4f9c88ec311d73f569b477e04e2646cab433251100e15dbc09c3b.scope: Deactivated successfully.
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.276961016Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=27.85455ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.278941678Z level=info msg="Executing migration" id="create kv_store table v1"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.27974557Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=804.662µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.283985623Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.284867996Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=881.983µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.28728485Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.287524496Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=239.656µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.290084024Z level=info msg="Executing migration" id="create permission table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.291373879Z level=info msg="Migration successfully executed" id="create permission table" duration=1.294155ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.295095118Z level=info msg="Executing migration" id="add unique index permission.role_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.29596385Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=868.032µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.298710673Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.300256664Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.550731ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.303041058Z level=info msg="Executing migration" id="create role table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.304260011Z level=info msg="Migration successfully executed" id="create role table" duration=1.215742ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.30686012Z level=info msg="Executing migration" id="add column display_name"
Jan 20 18:44:34 compute-0 systemd[1]: Reloading.
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.313242419Z level=info msg="Migration successfully executed" id="add column display_name" duration=6.378498ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.315517929Z level=info msg="Executing migration" id="add column group_name"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.321087137Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.564558ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.323152323Z level=info msg="Executing migration" id="add index role.org_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.324282442Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.130759ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.326683256Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.327935839Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.253473ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.331093113Z level=info msg="Executing migration" id="add index role_org_id_uid"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.332078459Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=986.086µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.335454469Z level=info msg="Executing migration" id="create team role table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.336340562Z level=info msg="Migration successfully executed" id="create team role table" duration=885.694µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.339121526Z level=info msg="Executing migration" id="add index team_role.org_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.340059942Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=938.645µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.343137913Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.34415865Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.023627ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.348863795Z level=info msg="Executing migration" id="add index team_role.team_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.349616784Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=752.619µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.352226704Z level=info msg="Executing migration" id="create user role table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.352921353Z level=info msg="Migration successfully executed" id="create user role table" duration=694.359µs
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.355071169Z level=info msg="Executing migration" id="add index user_role.org_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.356197199Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.12554ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.359388174Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.360866213Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.478739ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.364864159Z level=info msg="Executing migration" id="add index user_role.user_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.36600258Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.142351ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.368545357Z level=info msg="Executing migration" id="create builtin role table"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.369541524Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.005307ms
Jan 20 18:44:34 compute-0 systemd-rc-local-generator[100065]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:34 compute-0 systemd-sysv-generator[100069]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.435379522Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:34.43682381Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.448338ms
Jan 20 18:44:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:34 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:34 compute-0 systemd[1]: Reloading.
Jan 20 18:44:34 compute-0 systemd-rc-local-generator[100107]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:34 compute-0 systemd-sysv-generator[100112]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v81: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 511 B/s wr, 105 op/s; 309 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 18:44:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 20 18:44:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 18:44:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 20 18:44:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 18:44:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 20 18:44:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:34.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:34 compute-0 ceph-mon[74381]: 10.1f scrub starts
Jan 20 18:44:34 compute-0 ceph-mon[74381]: 10.1f scrub ok
Jan 20 18:44:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:34 compute-0 ceph-mon[74381]: pgmap v79: 337 pgs: 4 peering, 8 unknown, 2 active+clean+scrubbing, 323 active+clean; 455 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:34 compute-0 ceph-mon[74381]: 7.12 scrub starts
Jan 20 18:44:34 compute-0 ceph-mon[74381]: 7.12 scrub ok
Jan 20 18:44:34 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:34 compute-0 ceph-mon[74381]: osdmap e75: 3 total, 3 up, 3 in
Jan 20 18:44:34 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:34 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:34 compute-0 ceph-mon[74381]: Deploying daemon keepalived.rgw.default.compute-0.jxtvmz on compute-0
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.091593564Z level=info msg="Executing migration" id="add index builtin_role.name"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.093399523Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.808319ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.103958792Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.116703591Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=12.740768ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.120014519Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.121686803Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.675264ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.124665622Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.125955487Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.289405ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.130029405Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.132300145Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=2.272119ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.135515451Z level=info msg="Executing migration" id="add unique index role.uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.13740237Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.890229ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.140763339Z level=info msg="Executing migration" id="create seed assignment table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.141656693Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=895.444µs
Jan 20 18:44:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 18:44:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 18:44:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.145167247Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.146371029Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.205172ms
Jan 20 18:44:35 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.150598041Z level=info msg="Executing migration" id="add column hidden to role table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.159939649Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.340777ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.163285378Z level=info msg="Executing migration" id="permission kind migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.171745223Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.455925ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.174623649Z level=info msg="Executing migration" id="permission attribute migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.180634868Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=6.010569ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.182754584Z level=info msg="Executing migration" id="permission identifier migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.1897536Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=6.996886ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.192250996Z level=info msg="Executing migration" id="add permission identifier index"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.193627113Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.376417ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.19726149Z level=info msg="Executing migration" id="add permission action scope role_id index"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.198644407Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.388577ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.202425347Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.203729171Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.305124ms
Jan 20 18:44:35 compute-0 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.jxtvmz for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.207275616Z level=info msg="Executing migration" id="create query_history table v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.208605641Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.335626ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.211554669Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.212394982Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=840.603µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.215845563Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.215922305Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=74.822µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.218038452Z level=info msg="Executing migration" id="rbac disabled migrator"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.218072403Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=32.541µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.222610203Z level=info msg="Executing migration" id="teams permissions migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.222996533Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=386.22µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.225093459Z level=info msg="Executing migration" id="dashboard permissions"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.225691164Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=598.695µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.229862375Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.230499103Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=637.008µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.232842314Z level=info msg="Executing migration" id="drop managed folder create actions"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.2330775Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=235.586µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.234865129Z level=info msg="Executing migration" id="alerting notification permissions"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.235420933Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=555.425µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.237026725Z level=info msg="Executing migration" id="create query_history_star table v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.237735815Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=709.2µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.24056906Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.241386301Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=816.951µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.244161455Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.252970949Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.807404ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.256911823Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.257033907Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=128.044µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.259393179Z level=info msg="Executing migration" id="create correlation table v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.261115475Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.722746ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.26507066Z level=info msg="Executing migration" id="add index correlations.uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.266167209Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.099559ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.268572143Z level=info msg="Executing migration" id="add index correlations.source_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.269433166Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=861.152µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.271762008Z level=info msg="Executing migration" id="add correlation config column"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.279162714Z level=info msg="Migration successfully executed" id="add correlation config column" duration=7.390786ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.281372363Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.282632247Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.262254ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.284345742Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.285492302Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.14645ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.287596498Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.306772927Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=19.131568ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.309321705Z level=info msg="Executing migration" id="create correlation v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.310673711Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.351656ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.314212274Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.315094188Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=881.944µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.317507043Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.318429036Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=922.214µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.320869671Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.321701944Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=832.483µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.324317493Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.32457529Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=258.707µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.326444379Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.327291832Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=847.013µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.329270384Z level=info msg="Executing migration" id="add provisioning column"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.335427128Z level=info msg="Migration successfully executed" id="add provisioning column" duration=6.151804ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.337587565Z level=info msg="Executing migration" id="create entity_events table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.338546841Z level=info msg="Migration successfully executed" id="create entity_events table" duration=960.576µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.340425391Z level=info msg="Executing migration" id="create dashboard public config v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.341326135Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=900.154µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.343673237Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.344097829Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.346976965Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.347510219Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.350300973Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.351439313Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.13941ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.35434385Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.355567523Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.223723ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.358389098Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.359701943Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.312645ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.363363479Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.365090376Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.727637ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.367966442Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.368964848Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=996.796µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.370921551Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.371983719Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.062879ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.374032703Z level=info msg="Executing migration" id="Drop public config table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.375020669Z level=info msg="Migration successfully executed" id="Drop public config table" duration=987.586µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.377511066Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.378884582Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.373576ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.380749101Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.381690287Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=941.286µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.384942753Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.386175806Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.234843ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.388479396Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.389977606Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.4955ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.393680075Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Jan 20 18:44:35 compute-0 podman[100237]: 2026-01-20 18:44:35.409102234 +0000 UTC m=+0.040603249 container create 0b11a68c6e54d0cd5f8cd70784429ad29794996544482f4efc87d62337458f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., name=keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, distribution-scope=public, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.417254341Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.544205ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.420102716Z level=info msg="Executing migration" id="add annotations_enabled column"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.428350005Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.243359ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.430621926Z level=info msg="Executing migration" id="add time_selection_enabled column"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.43834066Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.713344ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.440519779Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.440751265Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=233.476µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.442748107Z level=info msg="Executing migration" id="add share column"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.451837809Z level=info msg="Migration successfully executed" id="add share column" duration=9.082202ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.453977665Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.454195781Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=218.556µs
Jan 20 18:44:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c679bdc04fa1c4e4d3ca3f5fb8625415f6f67ab71aad4fac252a6c3702994e3/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.456197285Z level=info msg="Executing migration" id="create file table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.457046307Z level=info msg="Migration successfully executed" id="create file table" duration=852.092µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.459290657Z level=info msg="Executing migration" id="file table idx: path natural pk"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.460131629Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=840.882µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.462198984Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.463059897Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=860.043µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.466744314Z level=info msg="Executing migration" id="create file_meta table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.467553646Z level=info msg="Migration successfully executed" id="create file_meta table" duration=810.012µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.470521175Z level=info msg="Executing migration" id="file table idx: path key"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.47144456Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=923.315µs
Jan 20 18:44:35 compute-0 podman[100237]: 2026-01-20 18:44:35.471281795 +0000 UTC m=+0.102782830 container init 0b11a68c6e54d0cd5f8cd70784429ad29794996544482f4efc87d62337458f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=1793, build-date=2023-02-22T09:23:20, name=keepalived, io.openshift.tags=Ceph keepalived, architecture=x86_64, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.47447738Z level=info msg="Executing migration" id="set path collation in file table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.474533711Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=57.211µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.477211173Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.477271064Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=60.631µs
Jan 20 18:44:35 compute-0 podman[100237]: 2026-01-20 18:44:35.477442629 +0000 UTC m=+0.108943644 container start 0b11a68c6e54d0cd5f8cd70784429ad29794996544482f4efc87d62337458f4e (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz, io.openshift.tags=Ceph keepalived, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.component=keepalived-container, name=keepalived)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.480305245Z level=info msg="Executing migration" id="managed permissions migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.480871059Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=566.475µs
Jan 20 18:44:35 compute-0 bash[100237]: 0b11a68c6e54d0cd5f8cd70784429ad29794996544482f4efc87d62337458f4e
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.484486866Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Jan 20 18:44:35 compute-0 podman[100237]: 2026-01-20 18:44:35.391908638 +0000 UTC m=+0.023409673 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.484696801Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=210.246µs
Jan 20 18:44:35 compute-0 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.jxtvmz for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Starting Keepalived v2.2.4 (08/21,2021)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Running on Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 (built for Linux 5.14.0)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Configuration file /etc/keepalived/keepalived.conf
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Starting VRRP child process, pid=4
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: Startup complete
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:44:35 2026: (VI_0) Entering BACKUP STATE
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: (VI_0) Entering BACKUP STATE (init)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:35 2026: VRRP_Script(check_backend) succeeded
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.630547263Z level=info msg="Executing migration" id="RBAC action name migrator"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.632390633Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.84786ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.719209488Z level=info msg="Executing migration" id="Add UID column to playlist"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.728307089Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.096631ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.733822425Z level=info msg="Executing migration" id="Update uid column values in playlist"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.734190005Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=388.5µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.736451705Z level=info msg="Executing migration" id="Add index for uid in playlist"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.737878413Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.430078ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.740865242Z level=info msg="Executing migration" id="update group index for alert rules"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.741255942Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=391.89µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.743323868Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.743513403Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=189.515µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.745831165Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.746267706Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=441.822µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.748347521Z level=info msg="Executing migration" id="add action column to seed_assignment"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.75510759Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=6.755449ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.758324806Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Jan 20 18:44:35 compute-0 sudo[99883]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.766172694Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.844928ms
Jan 20 18:44:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.768124047Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.769620856Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.49761ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.771337862Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Jan 20 18:44:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:44:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 20 18:44:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:35 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:35 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:35 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:35 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:35 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.cuhvnh on compute-2
Jan 20 18:44:35 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.cuhvnh on compute-2
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.84811421Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=76.768189ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.850583675Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.851938851Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.356436ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.855208438Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.856242836Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.034818ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.858639629Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.882822631Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.160811ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.886125779Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.892607381Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.478401ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.894591914Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.894953893Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=363.039µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.896848604Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.89707223Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=216.676µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.898715813Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.8989815Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=265.497µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:35 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.900915341Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.901072986Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=157.845µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.902842553Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.903073359Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=230.916µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.904704632Z level=info msg="Executing migration" id="create folder table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.905570546Z level=info msg="Migration successfully executed" id="create folder table" duration=865.684µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.907928627Z level=info msg="Executing migration" id="Add index for parent_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.908996866Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.067769ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.911919884Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.912780697Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=859.343µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.915171791Z level=info msg="Executing migration" id="Update folder title length"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.915192351Z level=info msg="Migration successfully executed" id="Update folder title length" duration=21.36µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.916956448Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.917958685Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.002837ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.920772019Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.921747685Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=975.775µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.923884512Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.924987821Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.103469ms
Jan 20 18:44:35 compute-0 sudo[100363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqnmjsmujtmsozrtpekfyxahbtduwnnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934675.3385322-51-69632969883320/AnsiballZ_command.py'
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.927289822Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.927683843Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=393.961µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.929245414Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.92947654Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=230.806µs
Jan 20 18:44:35 compute-0 sudo[100363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.931207796Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.93208991Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=882.374µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.934123143Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.935099809Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=976.306µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.936845376Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.937712069Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=863.953µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.940737829Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.941773957Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.036488ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.944022096Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.94489881Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=877.394µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.946862481Z level=info msg="Executing migration" id="create anon_device table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.947567661Z level=info msg="Migration successfully executed" id="create anon_device table" duration=704.7µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.950007735Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.951038062Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.029787ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.954482674Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.955422138Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=935.784µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.958198212Z level=info msg="Executing migration" id="create signing_key table"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.959144368Z level=info msg="Migration successfully executed" id="create signing_key table" duration=945.336µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.961445169Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.962278331Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=833.002µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.964956692Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.965975449Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.018657ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.967988392Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.96825554Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=267.768µs
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.973325194Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.979726504Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=6.4007ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.985295002Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.986463883Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.180331ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.989328789Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.990979863Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.661004ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.993586032Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.994756023Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.169601ms
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.996626133Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Jan 20 18:44:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.997965059Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.292044ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:35.999979992Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.001383709Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.404217ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.003640479Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.005044596Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.403347ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.007406739Z level=info msg="Executing migration" id="create sso_setting table"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.008644501Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.237703ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.012312389Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.013255305Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=944.226µs
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.016984483Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.017354734Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=373.521µs
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.021370019Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.021483903Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=114.214µs
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.023721172Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 9.3 scrub starts
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 9.3 scrub ok
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 12.14 scrub starts
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 12.14 scrub ok
Jan 20 18:44:36 compute-0 ceph-mon[74381]: pgmap v81: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 511 B/s wr, 105 op/s; 309 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 9.17 scrub starts
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 9.17 scrub ok
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 12.15 scrub starts
Jan 20 18:44:36 compute-0 ceph-mon[74381]: 12.15 scrub ok
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 18:44:36 compute-0 ceph-mon[74381]: osdmap e76: 3 total, 3 up, 3 in
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:36 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:36 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.033615895Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.893263ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.036001878Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.046212979Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=10.209381ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.048400228Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.048854769Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=455.371µs
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=migrator t=2026-01-20T18:44:36.050717129Z level=info msg="migrations completed" performed=547 skipped=0 duration=16.740824927s
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore t=2026-01-20T18:44:36.05263391Z level=info msg="Created default organization"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=secrets t=2026-01-20T18:44:36.055025773Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=plugin.store t=2026-01-20T18:44:36.092770765Z level=info msg="Loading plugins..."
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye[98005]: Tue Jan 20 18:44:36 2026: (VI_0) Entering MASTER STATE
Jan 20 18:44:36 compute-0 python3.9[100365]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                             pushd /var/tmp
                                             curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                             pushd repo-setup-main
                                             python3 -m venv ./venv
                                             PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                             ./venv/bin/repo-setup current-podified -b antelope
                                             popd
                                             rm -rf repo-setup-main
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=local.finder t=2026-01-20T18:44:36.173355195Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=plugin.store t=2026-01-20T18:44:36.173405306Z level=info msg="Plugins loaded" count=55 duration=80.637421ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=query_data t=2026-01-20T18:44:36.177156646Z level=info msg="Query Service initialization"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=live.push_http t=2026-01-20T18:44:36.183882585Z level=info msg="Live Push Gateway initialization"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.migration t=2026-01-20T18:44:36.186986087Z level=info msg=Starting
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.migration t=2026-01-20T18:44:36.187358267Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.migration orgID=1 t=2026-01-20T18:44:36.187908931Z level=info msg="Migrating alerts for organisation"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.migration orgID=1 t=2026-01-20T18:44:36.188482907Z level=info msg="Alerts found to migrate" alerts=0
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.migration t=2026-01-20T18:44:36.190359856Z level=info msg="Completed alerting migration"
Jan 20 18:44:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:44:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:36.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.state.manager t=2026-01-20T18:44:36.210924582Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=infra.usagestats.collector t=2026-01-20T18:44:36.212929195Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=provisioning.datasources t=2026-01-20T18:44:36.214017774Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=provisioning.alerting t=2026-01-20T18:44:36.224896604Z level=info msg="starting to provision alerting"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=provisioning.alerting t=2026-01-20T18:44:36.224920284Z level=info msg="finished to provision alerting"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=grafanaStorageLogger t=2026-01-20T18:44:36.22513199Z level=info msg="Storage starting"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.multiorg.alertmanager t=2026-01-20T18:44:36.225380956Z level=info msg="Starting MultiOrg Alertmanager"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.state.manager t=2026-01-20T18:44:36.225364556Z level=info msg="Warming state cache for startup"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=http.server t=2026-01-20T18:44:36.227648657Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=http.server t=2026-01-20T18:44:36.228190931Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-01-20T18:44:36.258196548Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.state.manager t=2026-01-20T18:44:36.258328171Z level=info msg="State cache has been initialized" states=0 duration=32.960275ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ngalert.scheduler t=2026-01-20T18:44:36.258362382Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ticker t=2026-01-20T18:44:36.258438264Z level=info msg=starting first_tick=2026-01-20T18:44:40Z
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=provisioning.dashboard t=2026-01-20T18:44:36.301425995Z level=info msg="starting to provision dashboards"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=grafana.update.checker t=2026-01-20T18:44:36.310993709Z level=info msg="Update check succeeded" duration=85.287905ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=plugins.update.checker t=2026-01-20T18:44:36.311475322Z level=info msg="Update check succeeded" duration=86.052055ms
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-01-20T18:44:36.341506769Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-01-20T18:44:36.356891478Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:36 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=provisioning.dashboard t=2026-01-20T18:44:36.670380081Z level=info msg="finished to provision dashboards"
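
The "database is locked" retries above are SQLite write contention while Grafana's migrations, provisioning, and state-cache warmup run concurrently at startup; Grafana retries and the startup completes. Once the HTTPS listener is up on 192.168.122.100:3000 (see the "HTTP Server Listen" line above), a quick liveness probe against Grafana's health endpoint, with -k assuming the cephadm-issued certificate is not in the local trust store:

    curl -ks https://192.168.122.100:3000/api/health
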
Jan 20 18:44:36 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 20 18:44:36 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 20 18:44:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 511 B/s wr, 105 op/s; 309 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 18:44:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 20 18:44:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 18:44:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 20 18:44:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=grafana-apiserver t=2026-01-20T18:44:36.743528533Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 20 18:44:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=grafana-apiserver t=2026-01-20T18:44:36.744115679Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 20 18:44:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:44:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:36.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
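
The anonymous "HEAD / HTTP/1.0" requests alternating between 192.168.122.100 and 192.168.122.102 roughly every two seconds are load-balancer health probes from the ingress.rgw.default haproxy instances, not client traffic. A rough tally of probe sources from collected logs (the log path is an assumption; adjust to wherever these lines are persisted):

    grep '"HEAD / HTTP/1.0"' /var/log/messages | grep -o '192\.168\.122\.[0-9]*' | sort | uniq -c
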
Jan 20 18:44:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 20 18:44:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 18:44:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
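
The mgr is stepping pgp_num_actual up one value at a time after a pg_num change (val 6 here, val 7 at 18:44:44 below); each step produces a new osdmap epoch and brief repeering while placement groups split. Standard commands to inspect the progress:

    ceph osd pool autoscale-status
    ceph osd pool get cephfs.cephfs.meta pgp_num
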
Jan 20 18:44:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 20 18:44:37 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 10.1 scrub starts
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 10.1 scrub ok
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Jan 20 18:44:37 compute-0 ceph-mon[74381]: Deploying daemon keepalived.rgw.default.compute-2.cuhvnh on compute-2
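
Before placing the VIP holder, the cephadm module checks which hosts carry the 192.168.122.0/24 subnet (on br-ex, per the two lines above), then deploys keepalived for ingress.rgw.default on compute-2. A hedged sketch of the kind of ingress spec that drives this behavior; the virtual IP and subnet come from the log, all other field values are assumptions:

    ceph orch apply -i - <<'EOF'
    service_type: ingress
    service_id: rgw.default
    placement:
      hosts: [compute-0, compute-2]   # assumption
    spec:
      backend_service: rgw.default
      virtual_ip: 192.168.122.2/24    # matches the subnet check logged above
      frontend_port: 8080             # assumption
      monitor_port: 8999              # assumption
    EOF
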
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 10.9 scrub starts
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 10.9 scrub ok
Jan 20 18:44:37 compute-0 ceph-mon[74381]: 8.1a scrub starts
Jan 20 18:44:37 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 18:44:37 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Jan 20 18:44:37 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 20 18:44:37 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 20 18:44:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.764554) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677764850, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7713, "num_deletes": 260, "total_data_size": 14625725, "memory_usage": 15677136, "flush_reason": "Manual Compaction"}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677859962, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 13157751, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7850, "table_properties": {"data_size": 13129098, "index_size": 18426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9157, "raw_key_size": 88794, "raw_average_key_size": 24, "raw_value_size": 13058682, "raw_average_value_size": 3570, "num_data_blocks": 806, "num_entries": 3657, "num_filter_entries": 3657, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934328, "oldest_key_time": 1768934328, "file_creation_time": 1768934677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 95468 microseconds, and 28732 cpu microseconds.
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.860046) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 13157751 bytes OK
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.860081) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.862395) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.862416) EVENT_LOG_v1 {"time_micros": 1768934677862410, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.862452) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 14590314, prev total WAL file size 14590314, number of live WAL files 2.
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.865854) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323539' seq:0, type:0; will stop at (end)
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(12MB) 13(57KB) 8(1944B)]
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677866031, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 13218189, "oldest_snapshot_seqno": -1}
Jan 20 18:44:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:37 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3465 keys, 13200052 bytes, temperature: kUnknown
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677986469, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 13200052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13171687, "index_size": 18591, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8709, "raw_key_size": 87476, "raw_average_key_size": 25, "raw_value_size": 13102991, "raw_average_value_size": 3781, "num_data_blocks": 813, "num_entries": 3465, "num_filter_entries": 3465, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768934677, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.986941) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 13200052 bytes
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.990558) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 109.6 rd, 109.5 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(12.6, 0.0 +0.0 blob) out(12.6 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3766, records dropped: 301 output_compression: NoCompression
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.990599) EVENT_LOG_v1 {"time_micros": 1768934677990581, "job": 4, "event": "compaction_finished", "compaction_time_micros": 120557, "compaction_time_cpu_micros": 35226, "output_level": 6, "num_output_files": 1, "total_output_size": 13200052, "num_input_records": 3766, "num_output_records": 3465, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677992672, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677992763, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934677992832, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 20 18:44:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:37.865635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
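
The RocksDB burst above is routine monitor-store housekeeping, not an error: a ~13 MB memtable flush to L0 (table #19), then a manual compaction of three L0 files into L6 that dropped 301 deleted records and removed the old WAL and SST files. The same compaction can be requested by hand:

    ceph tell mon.compute-0 compact
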
Jan 20 18:44:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:38 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 20 18:44:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:38.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 20 18:44:38 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 77 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.047843933s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 257.383026123s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 78 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.047786713s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.383026123s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 77 pg[9.5( v 63'1088 (0'0,63'1088] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.045457840s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 63'1087 mlcod 63'1087 active pruub 257.382354736s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 78 pg[9.5( v 63'1088 (0'0,63'1088] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.045394897s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 63'1087 mlcod 0'0 unknown NOTIFY pruub 257.382354736s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 77 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.045398712s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 257.382446289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 78 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.045159340s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.382446289s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 77 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.044320107s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 257.382110596s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 78 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=77 pruub=12.044295311s) [2] r=-1 lpr=77 pi=[62,77)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 257.382110596s@ mbc={}] state<Start>: transitioning to Stray
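
Each osdmap epoch from the pgp_num steps re-shards placement: here PGs 9.5, 9.d, 9.15, and 9.1d move off osd.0 (up/acting [0] -> [2]), so osd.0 transitions them to Stray; in later epochs below (e79-e81) the acting sets flip again while the split settles. To watch this converge back to active+clean:

    ceph pg dump pgs_brief | grep -v 'active+clean'
    ceph pg 9.5 query
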
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 10.11 scrub starts
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 10.11 scrub ok
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 8.1a scrub ok
Jan 20 18:44:38 compute-0 ceph-mon[74381]: pgmap v83: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 511 B/s wr, 105 op/s; 309 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 12.f scrub starts
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 12.f scrub ok
Jan 20 18:44:38 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 18:44:38 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 18:44:38 compute-0 ceph-mon[74381]: osdmap e77: 3 total, 3 up, 3 in
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 10.7 scrub starts
Jan 20 18:44:38 compute-0 ceph-mon[74381]: 10.7 scrub ok
Jan 20 18:44:38 compute-0 ceph-mon[74381]: osdmap e78: 3 total, 3 up, 3 in
Jan 20 18:44:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:38 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:38 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 20 18:44:38 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 20 18:44:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:44:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:44:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v86: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 39 op/s; 153 B/s, 4 objects/s recovering
Jan 20 18:44:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 20 18:44:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:38 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 20b96eeb-eb9a-44b6-8305-40113d8c2ed7 (Updating ingress.rgw.default deployment (+4 -> 4))
Jan 20 18:44:38 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 20b96eeb-eb9a-44b6-8305-40113d8c2ed7 (Updating ingress.rgw.default deployment (+4 -> 4)) in 20 seconds
Jan 20 18:44:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Jan 20 18:44:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:38 compute-0 ceph-mgr[74676]: [progress INFO root] update: starting ev 7093dd8b-5949-431f-8c17-d9582c35ae89 (Updating prometheus deployment (+1 -> 1))
Jan 20 18:44:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:38.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:39 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Jan 20 18:44:39 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Jan 20 18:44:39 compute-0 sudo[100387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:44:39 compute-0 sudo[100387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:39 compute-0 sudo[100387]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-rgw-default-compute-0-jxtvmz[100281]: Tue Jan 20 18:44:39 2026: (VI_0) Entering MASTER STATE
Jan 20 18:44:39 compute-0 sudo[100412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/prometheus:v2.51.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:44:39 compute-0 sudo[100412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
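
The sudo entries show how the mgr drives deployments over SSH as ceph-admin: a short `which python3` probe, then the cephadm binary it copied to /var/lib/ceph/<fsid>/ (note the content-hash suffix on the filename) run under sudo with `_orch deploy` for the prometheus daemon. Once done, the result is visible from the orchestrator:

    ceph orch ps compute-0
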
Jan 20 18:44:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 20 18:44:39 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 20 18:44:39 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 20 18:44:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:39 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 20 18:44:40 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 20 18:44:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:40 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.5( v 63'1088 (0'0,63'1088] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 63'1087 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.5( v 63'1088 (0'0,63'1088] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 63'1087 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 79 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:40.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 8.c scrub starts
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 8.1e scrub starts
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 8.c scrub ok
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 8.1e scrub ok
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 8.1d scrub starts
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 8.1d scrub ok
Jan 20 18:44:40 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:40 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:40 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:40 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 12.1 scrub starts
Jan 20 18:44:40 compute-0 ceph-mon[74381]: 12.1 scrub ok
Jan 20 18:44:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:40 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:40 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 20 18:44:40 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 20 18:44:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:40.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 20 18:44:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 20 18:44:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 20 18:44:41 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 80 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:41 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 80 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:41 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 80 pg[9.5( v 63'1088 (0'0,63'1088] local-lis/les=79/80 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[62,79)/1 crt=63'1088 lcod 63'1087 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:41 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 80 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=79) [2]/[0] async=[2] r=0 lpr=79 pi=[62,79)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 8.1f scrub starts
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 8.1f scrub ok
Jan 20 18:44:41 compute-0 ceph-mon[74381]: pgmap v86: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 39 op/s; 153 B/s, 4 objects/s recovering
Jan 20 18:44:41 compute-0 ceph-mon[74381]: Deploying daemon prometheus.compute-0 on compute-0
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 11.17 deep-scrub starts
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 11.17 deep-scrub ok
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 11.1f scrub starts
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 11.1f scrub ok
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 8.17 scrub starts
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 8.17 scrub ok
Jan 20 18:44:41 compute-0 ceph-mon[74381]: osdmap e79: 3 total, 3 up, 3 in
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 7.11 scrub starts
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 7.11 scrub ok
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 11.14 scrub starts
Jan 20 18:44:41 compute-0 ceph-mon[74381]: 11.14 scrub ok
Jan 20 18:44:41 compute-0 ceph-mon[74381]: osdmap e80: 3 total, 3 up, 3 in
Jan 20 18:44:41 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 20 18:44:41 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 20 18:44:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:41 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:42 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 20 18:44:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 20 18:44:42 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=4 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.942071915s) [2] async=[2] r=-1 lpr=81 pi=[62,81)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 264.165832520s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.15( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=4 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.941643715s) [2] r=-1 lpr=81 pi=[62,81)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 264.165832520s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=6 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.941053391s) [2] async=[2] r=-1 lpr=81 pi=[62,81)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 264.165771484s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.d( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=6 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.940940857s) [2] r=-1 lpr=81 pi=[62,81)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 264.165771484s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.5( v 80'1092 (0'0,80'1092] local-lis/les=79/80 n=6 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.946389198s) [2] async=[2] r=-1 lpr=81 pi=[62,81)/1 crt=63'1088 lcod 80'1091 mlcod 80'1091 active pruub 264.172332764s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=5 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.946472168s) [2] async=[2] r=-1 lpr=81 pi=[62,81)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 264.172363281s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.1d( v 49'1085 (0'0,49'1085] local-lis/les=79/80 n=5 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.945610046s) [2] r=-1 lpr=81 pi=[62,81)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 264.172363281s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 81 pg[9.5( v 80'1092 (0'0,80'1092] local-lis/les=79/80 n=6 ec=62/41 lis/c=79/62 les/c/f=80/63/0 sis=81 pruub=14.946202278s) [2] r=-1 lpr=81 pi=[62,81)/1 crt=63'1088 lcod 80'1091 mlcod 0'0 unknown NOTIFY pruub 264.172332764s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:44:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:42.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:44:42 compute-0 ceph-mon[74381]: pgmap v88: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:42 compute-0 ceph-mon[74381]: 11.10 scrub starts
Jan 20 18:44:42 compute-0 ceph-mon[74381]: 11.10 scrub ok
Jan 20 18:44:42 compute-0 ceph-mon[74381]: 8.14 deep-scrub starts
Jan 20 18:44:42 compute-0 ceph-mon[74381]: 8.14 deep-scrub ok
Jan 20 18:44:42 compute-0 ceph-mon[74381]: osdmap e81: 3 total, 3 up, 3 in
Jan 20 18:44:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:42 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:42 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 20 18:44:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:44:42 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 20 18:44:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:42.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 20 18:44:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 20 18:44:43 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 20 18:44:43 compute-0 ceph-mgr[74676]: [progress INFO root] Writing back 27 completed events
Jan 20 18:44:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 18:44:43 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 20 18:44:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:43 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:44 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:44 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 20 18:44:44 compute-0 ceph-mon[74381]: 8.13 scrub starts
Jan 20 18:44:44 compute-0 ceph-mon[74381]: 8.13 scrub ok
Jan 20 18:44:44 compute-0 ceph-mon[74381]: 11.f scrub starts
Jan 20 18:44:44 compute-0 ceph-mon[74381]: 11.f scrub ok
Jan 20 18:44:44 compute-0 ceph-mon[74381]: osdmap e82: 3 total, 3 up, 3 in
Jan 20 18:44:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:44.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:44 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
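
The repeating TIRPC svc_vc_recv events on fd 39 are ganesha dropping connections that close before delivering a complete PROXY-protocol header, consistent with TCP health probes from the NFS ingress (keepalived/haproxy) rather than real NFS clients; the bare "%" is an unexpanded format specifier in ganesha's own log message. A frequency check (log path is an assumption):

    grep -c 'svc_vc_recv.*will set dead' /var/log/messages
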
Jan 20 18:44:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v93: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 33 op/s; 137 B/s, 4 objects/s recovering
Jan 20 18:44:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 20 18:44:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 18:44:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 20 18:44:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 18:44:44 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 20 18:44:44 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 20 18:44:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:44:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:44:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 20 18:44:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 18:44:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 18:44:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 20 18:44:45 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 20 18:44:45 compute-0 ceph-mon[74381]: pgmap v91: 337 pgs: 4 unknown, 2 peering, 331 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 11.11 scrub starts
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 11.11 scrub ok
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 9.d scrub starts
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 9.d scrub ok
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 11.6 scrub starts
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 8.8 scrub starts
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 8.8 scrub ok
Jan 20 18:44:45 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 11.6 scrub ok
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 9.1d scrub starts
Jan 20 18:44:45 compute-0 ceph-mon[74381]: 9.1d scrub ok
Jan 20 18:44:45 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 18:44:45 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[6.e( empty local-lis/les=0/0 n=0 ec=58/23 lis/c=69/69 les/c/f=70/70/0 sis=83) [0] r=0 lpr=83 pi=[69,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[6.6( empty local-lis/les=0/0 n=0 ec=58/23 lis/c=69/69 les/c/f=70/70/0 sis=83) [0] r=0 lpr=83 pi=[69,83)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.088119507s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 265.382781982s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.088041306s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.382781982s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.087266922s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 265.382446289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.087231636s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.382446289s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.083202362s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 265.379211426s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.083170891s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.379211426s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.082991600s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 265.379241943s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:45 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 83 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=83 pruub=13.082929611s) [1] r=-1 lpr=83 pi=[62,83)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.379241943s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.30583935 +0000 UTC m=+5.796317832 volume create 7324909515a49b9fd3b563f48eddebb186c7832859d3d0093d665ec4530c0b73
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.3209456 +0000 UTC m=+5.811424082 container create 290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35 (image=quay.io/prometheus/prometheus:v2.51.0, name=nostalgic_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 systemd[1]: Started libpod-conmon-290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35.scope.
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.284560174 +0000 UTC m=+5.775038696 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 20 18:44:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f19f531b5935ab6d5c1255c889e77cbaed7445c8a478a2c52134ee2ea26c70b5/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.413092937 +0000 UTC m=+5.903571419 container init 290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35 (image=quay.io/prometheus/prometheus:v2.51.0, name=nostalgic_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.422923378 +0000 UTC m=+5.913401860 container start 290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35 (image=quay.io/prometheus/prometheus:v2.51.0, name=nostalgic_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 nostalgic_lalande[100752]: 65534 65534
Jan 20 18:44:45 compute-0 systemd[1]: libpod-290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35.scope: Deactivated successfully.
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.427769317 +0000 UTC m=+5.918247809 container attach 290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35 (image=quay.io/prometheus/prometheus:v2.51.0, name=nostalgic_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.428324171 +0000 UTC m=+5.918802633 container died 290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35 (image=quay.io/prometheus/prometheus:v2.51.0, name=nostalgic_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f19f531b5935ab6d5c1255c889e77cbaed7445c8a478a2c52134ee2ea26c70b5-merged.mount: Deactivated successfully.
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.507680259 +0000 UTC m=+5.998158731 container remove 290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35 (image=quay.io/prometheus/prometheus:v2.51.0, name=nostalgic_lalande, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 systemd[1]: libpod-conmon-290f153a749dfdc914a13ecbe3371ec34c8a1558b430dffea707f5d3f170ca35.scope: Deactivated successfully.
Jan 20 18:44:45 compute-0 podman[100476]: 2026-01-20 18:44:45.516617896 +0000 UTC m=+6.007096368 volume remove 7324909515a49b9fd3b563f48eddebb186c7832859d3d0093d665ec4530c0b73
Jan 20 18:44:45 compute-0 sudo[100363]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.600344529 +0000 UTC m=+0.047842621 volume create 2ae699737e445da0b004a3b411141a18325ebfeb077cecc3061ef37819330b5b
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.609011929 +0000 UTC m=+0.056510021 container create 97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_shamir, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 systemd[1]: Started libpod-conmon-97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5.scope.
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.581046596 +0000 UTC m=+0.028544738 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 20 18:44:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0376457d3a64376ed4fbd453b781b8a4b9525b7882b58070c98fedc4f4d86ecc/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.69760487 +0000 UTC m=+0.145102982 container init 97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_shamir, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.705692835 +0000 UTC m=+0.153190937 container start 97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_shamir, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 dazzling_shamir[100811]: 65534 65534
Jan 20 18:44:45 compute-0 systemd[1]: libpod-97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5.scope: Deactivated successfully.
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.710488253 +0000 UTC m=+0.157986365 container attach 97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_shamir, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.710802041 +0000 UTC m=+0.158300143 container died 97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_shamir, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0376457d3a64376ed4fbd453b781b8a4b9525b7882b58070c98fedc4f4d86ecc-merged.mount: Deactivated successfully.
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.751986134 +0000 UTC m=+0.199484226 container remove 97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5 (image=quay.io/prometheus/prometheus:v2.51.0, name=dazzling_shamir, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:45 compute-0 podman[100769]: 2026-01-20 18:44:45.757298576 +0000 UTC m=+0.204796678 volume remove 2ae699737e445da0b004a3b411141a18325ebfeb077cecc3061ef37819330b5b
Jan 20 18:44:45 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 20 18:44:45 compute-0 systemd[1]: libpod-conmon-97f90858dcaed7be3c1b609ea35651313b13166fcf26d8fc08e4dbdf8e5aafb5.scope: Deactivated successfully.
Jan 20 18:44:45 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 20 18:44:45 compute-0 systemd[1]: Reloading.
Jan 20 18:44:45 compute-0 systemd-sysv-generator[100859]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:45 compute-0 systemd-rc-local-generator[100853]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:45 compute-0 sshd-session[99726]: Connection closed by 192.168.122.30 port 41518
Jan 20 18:44:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:45 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:46 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a0003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:46 compute-0 sshd-session[99723]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:44:46 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 18:44:46 compute-0 systemd[1]: session-38.scope: Consumed 9.100s CPU time.
Jan 20 18:44:46 compute-0 systemd-logind[796]: Session 38 logged out. Waiting for processes to exit.
Jan 20 18:44:46 compute-0 systemd-logind[796]: Removed session 38.
Jan 20 18:44:46 compute-0 systemd[1]: Reloading.
Jan 20 18:44:46 compute-0 systemd-rc-local-generator[100894]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:44:46 compute-0 systemd-sysv-generator[100899]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 20 18:44:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:46.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[6.6( v 56'46 lc 55'39 (0'0,56'46] local-lis/les=83/84 n=1 ec=58/23 lis/c=69/69 les/c/f=70/70/0 sis=83) [0] r=0 lpr=83 pi=[69,83)/1 crt=56'46 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 20 18:44:46 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 84 pg[6.e( v 56'46 lc 55'19 (0'0,56'46] local-lis/les=83/84 n=1 ec=58/23 lis/c=69/69 les/c/f=70/70/0 sis=83) [0] r=0 lpr=83 pi=[69,83)/1 crt=56'46 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:46 compute-0 ceph-mon[74381]: pgmap v93: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 33 op/s; 137 B/s, 4 objects/s recovering
Jan 20 18:44:46 compute-0 ceph-mon[74381]: 11.15 scrub starts
Jan 20 18:44:46 compute-0 ceph-mon[74381]: 11.15 scrub ok
Jan 20 18:44:46 compute-0 ceph-mon[74381]: 11.1 scrub starts
Jan 20 18:44:46 compute-0 ceph-mon[74381]: 11.1 scrub ok
Jan 20 18:44:46 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 18:44:46 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 18:44:46 compute-0 ceph-mon[74381]: osdmap e83: 3 total, 3 up, 3 in
Jan 20 18:44:46 compute-0 ceph-mon[74381]: 9.1f deep-scrub starts
Jan 20 18:44:46 compute-0 ceph-mon[74381]: 9.1f deep-scrub ok
Jan 20 18:44:46 compute-0 systemd[1]: Starting Ceph prometheus.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:44:46 compute-0 podman[100956]: 2026-01-20 18:44:46.614045102 +0000 UTC m=+0.041142914 container create 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:46 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1b4003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f74f16be36a506a240c888905540a42475650f4de30a32fb3f7fd65c0781e2/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45f74f16be36a506a240c888905540a42475650f4de30a32fb3f7fd65c0781e2/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:44:46
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', '.nfs', 'default.rgw.control', '.mgr', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.meta']
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 2/10 upmap changes
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] Executing plan auto_2026-01-20_18:44:46
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] ceph osd pg-upmap-items 9.0 mappings [{'from': 0, 'to': 1}]
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [balancer INFO root] ceph osd pg-upmap-items 9.14 mappings [{'from': 0, 'to': 1}]
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.0", "id": [0, 1]} v 0)
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.0", "id": [0, 1]}]: dispatch
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]} v 0)
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]: dispatch
Jan 20 18:44:46 compute-0 podman[100956]: 2026-01-20 18:44:46.682112929 +0000 UTC m=+0.109210751 container init 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:46 compute-0 podman[100956]: 2026-01-20 18:44:46.59516395 +0000 UTC m=+0.022261772 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Jan 20 18:44:46 compute-0 podman[100956]: 2026-01-20 18:44:46.69005331 +0000 UTC m=+0.117151112 container start 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:46 compute-0 bash[100956]: 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v96: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 34 op/s; 141 B/s, 4 objects/s recovering
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 18:44:46 compute-0 systemd[1]: Started Ceph prometheus.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.726Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.726Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.726Z caller=main.go:623 level=info host_details="(Linux 5.14.0-661.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026 x86_64 compute-0 (none))"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.726Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.726Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.729Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.731Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.735Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.735Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.736Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.736Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=38.561µs
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.736Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.737Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.737Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=35.441µs wal_replay_duration=218.886µs wbl_replay_duration=280ns total_replay_duration=318.189µs
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.738Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.738Z caller=main.go:1153 level=info msg="TSDB started"
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.738Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:44:46 compute-0 sudo[100412]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.767Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=28.802935ms db_storage=2µs remote_storage=1.5µs web_handler=1.02µs query_engine=1.56µs scrape=3.442702ms scrape_sd=228.226µs notify=21.581µs notify_sd=24.17µs rules=24.584193ms tracing=14.081µs
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.767Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Jan 20 18:44:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0[100972]: ts=2026-01-20T18:44:46.768Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:44:46 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 20 18:44:46 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [progress INFO root] complete: finished ev 7093dd8b-5949-431f-8c17-d9582c35ae89 (Updating prometheus deployment (+1 -> 1))
Jan 20 18:44:46 compute-0 ceph-mgr[74676]: [progress INFO root] Completed event 7093dd8b-5949-431f-8c17-d9582c35ae89 (Updating prometheus deployment (+1 -> 1)) in 8 seconds
Jan 20 18:44:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Jan 20 18:44:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 20 18:44:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:46.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 20 18:44:47 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Jan 20 18:44:47 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.0", "id": [0, 1]}]': finished
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]': finished
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 18:44:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 20 18:44:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e85 crush map has features 3314933000854323200, adjusting msgr requires
Jan 20 18:44:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e85 crush map has features 432629239337189376, adjusting msgr requires
Jan 20 18:44:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e85 crush map has features 432629239337189376, adjusting msgr requires
Jan 20 18:44:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e85 crush map has features 432629239337189376, adjusting msgr requires
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 85 crush map has features 432629239337189376, adjusting msgr requires for clients
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 85 crush map has features 432629239337189376 was 288514051259245057, adjusting msgr requires for mons
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 85 crush map has features 3314933000854323200, adjusting msgr requires for osds
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=10.431414604s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 265.383117676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=8 ec=41/41 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=10.431085587s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 active pruub 265.383117676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=8 ec=41/41 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=10.431034088s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 unknown NOTIFY pruub 265.383117676s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=10.431174278s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 265.383117676s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:47 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 85 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=84) [1]/[0] async=[1] r=0 lpr=84 pi=[62,84)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:47 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc194003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.944293) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934687944366, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 551, "num_deletes": 251, "total_data_size": 814900, "memory_usage": 826592, "flush_reason": "Manual Compaction"}
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 20 18:44:47 compute-0 ceph-mon[74381]: 7.18 scrub starts
Jan 20 18:44:47 compute-0 ceph-mon[74381]: 7.18 scrub ok
Jan 20 18:44:47 compute-0 ceph-mon[74381]: 11.4 scrub starts
Jan 20 18:44:47 compute-0 ceph-mon[74381]: 11.4 scrub ok
Jan 20 18:44:47 compute-0 ceph-mon[74381]: osdmap e84: 3 total, 3 up, 3 in
Jan 20 18:44:47 compute-0 ceph-mon[74381]: 9.13 deep-scrub starts
Jan 20 18:44:47 compute-0 ceph-mon[74381]: 9.13 deep-scrub ok
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.0", "id": [0, 1]}]: dispatch
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]: dispatch
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:47 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934687954311, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 809774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7851, "largest_seqno": 8401, "table_properties": {"data_size": 806469, "index_size": 1149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8613, "raw_average_key_size": 20, "raw_value_size": 799376, "raw_average_value_size": 1872, "num_data_blocks": 49, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934678, "oldest_key_time": 1768934678, "file_creation_time": 1768934687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 10088 microseconds, and 5989 cpu microseconds.
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.954379) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 809774 bytes OK
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.954412) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.956105) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.956128) EVENT_LOG_v1 {"time_micros": 1768934687956121, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.956161) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 811512, prev total WAL file size 811512, number of live WAL files 2.
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.956980) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(790KB)], [20(12MB)]
Jan 20 18:44:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934687957036, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 14009826, "oldest_snapshot_seqno": -1}
Jan 20 18:44:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.cepfkm(active, since 2m), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:48 compute-0 sshd-session[93017]: Connection closed by 192.168.122.100 port 41482
Jan 20 18:44:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:48 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc19c004050 fd 14 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:48 compute-0 sshd-session[92986]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 18:44:48 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 18:44:48 compute-0 systemd[1]: session-36.scope: Consumed 49.130s CPU time.
Jan 20 18:44:48 compute-0 systemd-logind[796]: Session 36 logged out. Waiting for processes to exit.
Jan 20 18:44:48 compute-0 systemd-logind[796]: Removed session 36.
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3367 keys, 12782839 bytes, temperature: kUnknown
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934688063997, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 12782839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12755735, "index_size": 17586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 86975, "raw_average_key_size": 25, "raw_value_size": 12689193, "raw_average_value_size": 3768, "num_data_blocks": 761, "num_entries": 3367, "num_filter_entries": 3367, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768934687, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.064633) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 12782839 bytes
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.066019) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.5 rd, 119.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 12.6 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(33.1) write-amplify(15.8) OK, records in: 3892, records dropped: 525 output_compression: NoCompression
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.066042) EVENT_LOG_v1 {"time_micros": 1768934688066030, "job": 6, "event": "compaction_finished", "compaction_time_micros": 107346, "compaction_time_cpu_micros": 34058, "output_level": 6, "num_output_files": 1, "total_output_size": 12782839, "num_input_records": 3892, "num_output_records": 3367, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934688066574, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934688069494, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:47.956903) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.069627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.069633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.069635) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.069636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:44:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:44:48.069639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:44:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setuser ceph since I am not root
Jan 20 18:44:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ignoring --setgroup ceph since I am not root
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: pidfile_write: ignore empty --pid-file
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'alerts'
Jan 20 18:44:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:48.223+0000 7fb4a656d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'balancer'
Jan 20 18:44:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:48.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:48.307+0000 7fb4a656d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Jan 20 18:44:48 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'cephadm'
Jan 20 18:44:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:48 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc1a4000fa0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:44:48 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Jan 20 18:44:48 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Jan 20 18:44:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:44:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:48.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:44:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 20 18:44:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 20 18:44:48 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=5 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.005240440s) [1] async=[1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 270.968933105s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=5 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.005146027s) [1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.968933105s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=8 ec=41/41 lis/c=62/62 les/c/f=63/63/0 sis=86) [1]/[0] r=0 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=6 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.001676559s) [1] async=[1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 270.965545654s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=8 ec=41/41 lis/c=62/62 les/c/f=63/63/0 sis=86) [1]/[0] r=0 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=86) [1]/[0] r=0 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.e( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=6 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.001564026s) [1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.965545654s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=5 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.005090714s) [1] async=[1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 270.968811035s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.16( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=5 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.004685402s) [1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.968811035s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=86) [1]/[0] r=0 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=6 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.003673553s) [1] async=[1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 270.968841553s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:48 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 86 pg[9.6( v 49'1085 (0'0,49'1085] local-lis/les=84/85 n=6 ec=62/41 lis/c=84/62 les/c/f=85/63/0 sis=86 pruub=15.003576279s) [1] r=-1 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 270.968841553s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:48 compute-0 ceph-mon[74381]: pgmap v96: 337 pgs: 337 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 34 op/s; 141 B/s, 4 objects/s recovering
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 12.12 scrub starts
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 12.12 scrub ok
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 11.7 scrub starts
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 11.7 scrub ok
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 8.3 scrub starts
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 8.3 scrub ok
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 10.13 deep-scrub starts
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 10.13 deep-scrub ok
Jan 20 18:44:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.0", "id": [0, 1]}]': finished
Jan 20 18:44:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pg-upmap-items", "format": "json", "pgid": "9.14", "id": [0, 1]}]': finished
Jan 20 18:44:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 18:44:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 18:44:48 compute-0 ceph-mon[74381]: osdmap e85: 3 total, 3 up, 3 in
Jan 20 18:44:48 compute-0 ceph-mon[74381]: from='mgr.14469 192.168.122.100:0/833385028' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Jan 20 18:44:48 compute-0 ceph-mon[74381]: mgrmap e27: compute-0.cepfkm(active, since 2m), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 8.16 deep-scrub starts
Jan 20 18:44:48 compute-0 ceph-mon[74381]: 8.16 deep-scrub ok
Jan 20 18:44:48 compute-0 ceph-mon[74381]: osdmap e86: 3 total, 3 up, 3 in
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'crash'
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:49.101+0000 7fb4a656d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Module crash has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'dashboard'
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'devicehealth'
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:49.726+0000 7fb4a656d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 18:44:49 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 20 18:44:49 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]:   from numpy import show_config as show_numpy_config
Jan 20 18:44:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:49.890+0000 7fb4a656d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'influx'
Jan 20 18:44:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 20 18:44:49 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 20 18:44:49 compute-0 kernel: ganesha.nfsd[99184]: segfault at 50 ip 00007fc2459c032e sp 00007fc1b2ffc210 error 4 in libntirpc.so.5.8[7fc2459a5000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 20 18:44:49 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[98077]: 20/01/2026 18:44:49 : epoch 696fccef : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc198000b60 fd 14 proxy ignored for local
Jan 20 18:44:49 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 87 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=86/87 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:49 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 87 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=86/87 n=8 ec=41/41 lis/c=62/62 les/c/f=63/63/0 sis=86) [1]/[0] async=[1] r=0 lpr=86 pi=[62,86)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:44:49 compute-0 ceph-mon[74381]: 12.19 scrub starts
Jan 20 18:44:49 compute-0 ceph-mon[74381]: 12.19 scrub ok
Jan 20 18:44:49 compute-0 ceph-mon[74381]: 11.a scrub starts
Jan 20 18:44:49 compute-0 ceph-mon[74381]: 11.a scrub ok
Jan 20 18:44:49 compute-0 ceph-mon[74381]: osdmap e87: 3 total, 3 up, 3 in
Jan 20 18:44:49 compute-0 systemd[1]: Started Process Core Dump (PID 101026/UID 0).
Jan 20 18:44:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:49.964+0000 7fb4a656d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Module influx has missing NOTIFY_TYPES member
Jan 20 18:44:49 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'insights'
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'iostat'
Jan 20 18:44:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:50.117+0000 7fb4a656d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'k8sevents'
Jan 20 18:44:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:50.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'localpool'
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'mirroring'
Jan 20 18:44:50 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Jan 20 18:44:50 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Jan 20 18:44:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:50.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:50 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'nfs'
Jan 20 18:44:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:51.141+0000 7fb4a656d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'orchestrator'
Jan 20 18:44:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:51.364+0000 7fb4a656d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 18:44:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:51.446+0000 7fb4a656d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'osd_support'
Jan 20 18:44:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:51.514+0000 7fb4a656d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 18:44:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:51.590+0000 7fb4a656d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'progress'
Jan 20 18:44:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:51.664+0000 7fb4a656d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Module progress has missing NOTIFY_TYPES member
Jan 20 18:44:51 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'prometheus'
Jan 20 18:44:51 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 20 18:44:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:52.027+0000 7fb4a656d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rbd_support'
Jan 20 18:44:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:52.128+0000 7fb4a656d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'restful'
Jan 20 18:44:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:52.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rgw'
Jan 20 18:44:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:52.571+0000 7fb4a656d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Jan 20 18:44:52 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'rook'
Jan 20 18:44:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 20 18:44:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 20 18:44:52 compute-0 systemd-coredump[101027]: Process 98081 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 52:
                                                    #0  0x00007fc2459c032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:44:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 20 18:44:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:52.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:52 compute-0 ceph-mon[74381]: 10.18 scrub starts
Jan 20 18:44:52 compute-0 ceph-mon[74381]: 10.18 scrub ok
Jan 20 18:44:52 compute-0 ceph-mon[74381]: 8.11 scrub starts
Jan 20 18:44:52 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 20 18:44:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 20 18:44:52 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 20 18:44:52 compute-0 systemd[1]: systemd-coredump@1-101026-0.service: Deactivated successfully.
Jan 20 18:44:52 compute-0 systemd[1]: systemd-coredump@1-101026-0.service: Consumed 1.305s CPU time.
Jan 20 18:44:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 88 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=86/87 n=5 ec=62/41 lis/c=86/62 les/c/f=87/63/0 sis=88 pruub=12.973260880s) [1] async=[1] r=-1 lpr=88 pi=[62,88)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 273.014862061s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 88 pg[9.14( v 49'1085 (0'0,49'1085] local-lis/les=86/87 n=5 ec=62/41 lis/c=86/62 les/c/f=87/63/0 sis=88 pruub=12.973194122s) [1] r=-1 lpr=88 pi=[62,88)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.014862061s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 88 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=86/87 n=8 ec=41/41 lis/c=86/62 les/c/f=87/63/0 sis=88 pruub=12.971920013s) [1] async=[1] r=-1 lpr=88 pi=[62,88)/1 crt=49'1085 lcod 49'1084 mlcod 49'1084 active pruub 273.014923096s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:52 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 88 pg[9.0( v 49'1085 (0'0,49'1085] local-lis/les=86/87 n=8 ec=41/41 lis/c=86/62 les/c/f=87/63/0 sis=88 pruub=12.971694946s) [1] r=-1 lpr=88 pi=[62,88)/1 crt=49'1085 lcod 49'1084 mlcod 0'0 unknown NOTIFY pruub 273.014923096s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:53 compute-0 podman[101034]: 2026-01-20 18:44:53.022336981 +0000 UTC m=+0.025034276 container died af40c80f0b5ca4437bf16fa284143bc3d66d618a40bbf26cc0fd453c9079a558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.174+0000 7fb4a656d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module rook has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'selftest'
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.247+0000 7fb4a656d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'snap_schedule'
Jan 20 18:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fc54353cd7985371d48975a97c84c383db39cbb5c9f773671dba86ab45e9ab2-merged.mount: Deactivated successfully.
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.335+0000 7fb4a656d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'stats'
Jan 20 18:44:53 compute-0 systemd[93001]: Starting Mark boot as successful...
Jan 20 18:44:53 compute-0 systemd[93001]: Finished Mark boot as successful.
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'status'
Jan 20 18:44:53 compute-0 podman[101034]: 2026-01-20 18:44:53.521981326 +0000 UTC m=+0.524678621 container remove af40c80f0b5ca4437bf16fa284143bc3d66d618a40bbf26cc0fd453c9079a558 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:44:53 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.534+0000 7fb4a656d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module status has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telegraf'
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.608+0000 7fb4a656d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'telemetry'
Jan 20 18:44:53 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:44:53 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.588s CPU time.
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.761+0000 7fb4a656d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 18:44:53 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Jan 20 18:44:53 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Jan 20 18:44:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:53.975+0000 7fb4a656d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Jan 20 18:44:53 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'volumes'
Jan 20 18:44:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:54.240+0000 7fb4a656d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr[py] Loading python module 'zabbix'
Jan 20 18:44:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:54.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:54.311+0000 7fb4a656d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: ms_deliver_dispatch: unhandled message 0x55746e31b860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.pyghhf started
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).mgr e27 prepare_beacon:  waiting for osdmon writeable to blocklist old instance.
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 8.11 scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 12.1c scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 12.1c scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 10.1e deep-scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 10.1e deep-scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 10.1b scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 11.e scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 11.e scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 10.1b scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 10.19 scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 10.19 scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: osdmap e88: 3 total, 3 up, 3 in
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 11.5 deep-scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 11.5 deep-scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 8.5 scrub starts
Jan 20 18:44:54 compute-0 ceph-mon[74381]: 8.5 scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Active manager daemon compute-0.cepfkm restarted
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.cepfkm
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.cepfkm(active, since 2m), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.cepfkm(active, starting, since 0.267241s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr handle_mgr_map Activating!
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr handle_mgr_map I am now activating
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.whkwsm started
Jan 20 18:44:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:54.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:54 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: balancer
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Starting
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:44:54
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Manager daemon compute-0.cepfkm is now available
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: cephadm
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: crash
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: dashboard
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [dashboard INFO access_control] Loading user roles DB version=2
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [dashboard INFO sso] Loading SSO DB version=1
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [dashboard INFO root] Configured CherryPy, starting engine...
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: devicehealth
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: iostat
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Starting
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: nfs
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: orchestrator
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: pg_autoscaler
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: progress
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [progress INFO root] Loading...
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fb4258bc6a0>, <progress.module.GhostEvent object at 0x7fb4258bc670>, <progress.module.GhostEvent object at 0x7fb4258bc640>, <progress.module.GhostEvent object at 0x7fb4258bc610>, <progress.module.GhostEvent object at 0x7fb4258bc6d0>, <progress.module.GhostEvent object at 0x7fb4258bc700>, <progress.module.GhostEvent object at 0x7fb4258bc730>, <progress.module.GhostEvent object at 0x7fb4258bc760>, <progress.module.GhostEvent object at 0x7fb4258bc790>, <progress.module.GhostEvent object at 0x7fb4258bc7c0>, <progress.module.GhostEvent object at 0x7fb4258bc7f0>, <progress.module.GhostEvent object at 0x7fb4258bc820>, <progress.module.GhostEvent object at 0x7fb4258bc850>, <progress.module.GhostEvent object at 0x7fb4258bc880>, <progress.module.GhostEvent object at 0x7fb4258bc8b0>, <progress.module.GhostEvent object at 0x7fb4258bc8e0>, <progress.module.GhostEvent object at 0x7fb4258bc910>, <progress.module.GhostEvent object at 0x7fb4258bc940>, <progress.module.GhostEvent object at 0x7fb4258bc970>, <progress.module.GhostEvent object at 0x7fb4258bc9a0>, <progress.module.GhostEvent object at 0x7fb4258bc9d0>, <progress.module.GhostEvent object at 0x7fb4258bca00>, <progress.module.GhostEvent object at 0x7fb4258bca30>, <progress.module.GhostEvent object at 0x7fb4258bca60>, <progress.module.GhostEvent object at 0x7fb4258bca90>, <progress.module.GhostEvent object at 0x7fb4258bcac0>, <progress.module.GhostEvent object at 0x7fb4258bcaf0>] historic events
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: prometheus
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [prometheus INFO root] server_addr: :: server_port: 9283
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [prometheus INFO root] Cache enabled
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [prometheus INFO root] starting metric collection thread
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [prometheus INFO root] Starting engine...
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:44:54] ENGINE Bus STARTING
Jan 20 18:44:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:44:54] ENGINE Bus STARTING
Jan 20 18:44:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: CherryPy Checker:
Jan 20 18:44:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: The Application mounted at '' has an empty config.
Jan 20 18:44:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [rbd_support INFO root] recovery thread starting
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [rbd_support INFO root] starting setup
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: rbd_support
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:54 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: restful
Jan 20 18:44:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"} v 0)
Jan 20 18:44:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: status
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [restful INFO root] server_addr: :: server_port: 8003
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: telemetry
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [restful WARNING root] server not running: no certificate configured
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] PerfHandler: starting
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: vms, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: volumes, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: mgr load Constructed class from module: volumes
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: backups, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_task_task: images, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:55.077+0000 7fb409cb2640 -1 client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TaskHandler: starting
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:55.082+0000 7fb40f4bd640 -1 client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:55.082+0000 7fb40f4bd640 -1 client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:55.082+0000 7fb40f4bd640 -1 client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:55.082+0000 7fb40f4bd640 -1 client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T18:44:55.082+0000 7fb40f4bd640 -1 client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: client.0 error registering admin socket command: (17) File exists
Jan 20 18:44:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"} v 0)
Jan 20 18:44:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] setup complete
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:44:55] ENGINE Serving on http://:::9283
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:44:55] ENGINE Serving on http://:::9283
Jan 20 18:44:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:44:55] ENGINE Bus STARTED
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:44:55] ENGINE Bus STARTED
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [prometheus INFO root] Engine started.
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Jan 20 18:44:55 compute-0 sshd-session[101237]: Accepted publickey for ceph-admin from 192.168.122.100 port 60056 ssh2: RSA SHA256:58ALtshni2jJ/laX5+bMZOBBr4k3I3UMx5wmNonUL8k
Jan 20 18:44:55 compute-0 systemd-logind[796]: New session 39 of user ceph-admin.
Jan 20 18:44:55 compute-0 systemd[1]: Started Session 39 of User ceph-admin.
Jan 20 18:44:55 compute-0 sshd-session[101237]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: [dashboard INFO dashboard.module] Engine started.
Jan 20 18:44:55 compute-0 sudo[101253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:44:55 compute-0 sudo[101253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:55 compute-0 sudo[101253]: pam_unix(sudo:session): session closed for user root
Jan 20 18:44:55 compute-0 sudo[101279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:44:55 compute-0 sudo[101279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 10.2 deep-scrub starts
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 10.2 deep-scrub ok
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 8.1b scrub starts
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 8.1b scrub ok
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf restarted
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Standby manager daemon compute-2.pyghhf started
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:44:55 compute-0 ceph-mon[74381]: osdmap e89: 3 total, 3 up, 3 in
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 8.d scrub starts
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 8.d scrub ok
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Active manager daemon compute-0.cepfkm restarted
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Activating manager daemon compute-0.cepfkm
Jan 20 18:44:55 compute-0 ceph-mon[74381]: mgrmap e28: compute-0.cepfkm(active, since 2m), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:55 compute-0 ceph-mon[74381]: osdmap e90: 3 total, 3 up, 3 in
Jan 20 18:44:55 compute-0 ceph-mon[74381]: mgrmap e29: compute-0.cepfkm(active, starting, since 0.267241s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm restarted
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Standby manager daemon compute-1.whkwsm started
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.bekmxe"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.eisxof"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.rrgioo"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-0.cepfkm", "id": "compute-0.cepfkm"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-1.whkwsm", "id": "compute-1.whkwsm"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr metadata", "who": "compute-2.pyghhf", "id": "compute-2.pyghhf"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mds metadata"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mon metadata"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: Manager daemon compute-0.cepfkm is now available
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/mirror_snapshot_schedule"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 11.1b scrub starts
Jan 20 18:44:55 compute-0 ceph-mon[74381]: 11.1b scrub ok
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.cepfkm/trash_purge_schedule"}]: dispatch
Jan 20 18:44:55 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 20 18:44:55 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 20 18:44:55 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.cepfkm(active, since 1.27972s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v3: 337 pgs: 337 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:56 compute-0 podman[101377]: 2026-01-20 18:44:56.140386264 +0000 UTC m=+0.057712164 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Jan 20 18:44:56 compute-0 podman[101377]: 2026-01-20 18:44:56.229303484 +0000 UTC m=+0.146629394 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:44:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:44:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:56.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:44:56 compute-0 ceph-mon[74381]: 7.f scrub starts
Jan 20 18:44:56 compute-0 ceph-mon[74381]: 7.f scrub ok
Jan 20 18:44:56 compute-0 ceph-mon[74381]: 11.3 scrub starts
Jan 20 18:44:56 compute-0 ceph-mon[74381]: 11.3 scrub ok
Jan 20 18:44:56 compute-0 ceph-mon[74381]: mgrmap e30: compute-0.cepfkm(active, since 1.27972s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:56 compute-0 ceph-mon[74381]: pgmap v3: 337 pgs: 337 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:56 compute-0 ceph-mon[74381]: 11.1d scrub starts
Jan 20 18:44:56 compute-0 ceph-mon[74381]: 11.1d scrub ok
Jan 20 18:44:56 compute-0 podman[101512]: 2026-01-20 18:44:56.753755779 +0000 UTC m=+0.047094332 container exec ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:56 compute-0 podman[101537]: 2026-01-20 18:44:56.810933697 +0000 UTC m=+0.044311408 container exec_died ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:56 compute-0 podman[101512]: 2026-01-20 18:44:56.816091993 +0000 UTC m=+0.109430506 container exec_died ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:56.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v4: 337 pgs: 337 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:56 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.cepfkm(active, since 2s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 20 18:44:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 18:44:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 20 18:44:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 18:44:56 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 20 18:44:56 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 20 18:44:56 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:44:56] ENGINE Bus STARTING
Jan 20 18:44:56 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:44:56] ENGINE Bus STARTING
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:44:57] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:44:57] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:44:57] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:44:57] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:44:57] ENGINE Bus STARTED
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:44:57] ENGINE Bus STARTED
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: [cephadm INFO cherrypy.error] [20/Jan/2026:18:44:57] ENGINE Client ('192.168.122.100', 49388) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:44:57 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : [20/Jan/2026:18:44:57] ENGINE Client ('192.168.122.100', 49388) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:44:57 compute-0 ceph-mon[74381]: 10.5 scrub starts
Jan 20 18:44:57 compute-0 ceph-mon[74381]: 10.5 scrub ok
Jan 20 18:44:57 compute-0 ceph-mon[74381]: 8.2 scrub starts
Jan 20 18:44:57 compute-0 ceph-mon[74381]: 8.2 scrub ok
Jan 20 18:44:57 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 18:44:57 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 18:44:57 compute-0 ceph-mon[74381]: mgrmap e31: compute-0.cepfkm(active, since 2s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:57 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 18:44:57 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Jan 20 18:44:57 compute-0 ceph-mon[74381]: [20/Jan/2026:18:44:56] ENGINE Bus STARTING
Jan 20 18:44:57 compute-0 ceph-mon[74381]: 11.1e scrub starts
Jan 20 18:44:57 compute-0 ceph-mon[74381]: 11.1e scrub ok
Jan 20 18:44:57 compute-0 ceph-mon[74381]: [20/Jan/2026:18:44:57] ENGINE Serving on http://192.168.122.100:8765
Jan 20 18:44:57 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 20 18:44:57 compute-0 podman[101667]: 2026-01-20 18:44:57.900060592 +0000 UTC m=+0.741588379 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:44:57 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 20 18:44:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 20 18:44:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 18:44:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 18:44:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 20 18:44:57 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 20 18:44:58 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 91 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=8.318687439s) [2] r=-1 lpr=91 pi=[62,91)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 273.382843018s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:58 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 91 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=8.318446159s) [2] r=-1 lpr=91 pi=[62,91)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.382843018s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:58 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 91 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=8.317820549s) [2] r=-1 lpr=91 pi=[62,91)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 273.382476807s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:58 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 91 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=91 pruub=8.317770004s) [2] r=-1 lpr=91 pi=[62,91)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 273.382476807s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:58 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 91 pg[6.8( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=91 pruub=13.269608498s) [1] r=-1 lpr=91 pi=[58,91)/1 crt=56'46 lcod 0'0 mlcod 0'0 active pruub 278.334503174s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:58 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 91 pg[6.8( v 56'46 (0'0,56'46] local-lis/les=58/60 n=0 ec=58/23 lis/c=58/58 les/c/f=60/60/0 sis=91 pruub=13.269584656s) [1] r=-1 lpr=91 pi=[58,91)/1 crt=56'46 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 278.334503174s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:58 compute-0 podman[101691]: 2026-01-20 18:44:58.006019785 +0000 UTC m=+0.090508003 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:44:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184458 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:44:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:44:58.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:58 compute-0 podman[101667]: 2026-01-20 18:44:58.497438972 +0000 UTC m=+1.338966729 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:44:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:44:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:44:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:44:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:44:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:44:58.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:44:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v6: 337 pgs: 337 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 7.2 scrub starts
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 7.2 scrub ok
Jan 20 18:44:58 compute-0 ceph-mon[74381]: [20/Jan/2026:18:44:57] ENGINE Serving on https://192.168.122.100:7150
Jan 20 18:44:58 compute-0 ceph-mon[74381]: [20/Jan/2026:18:44:57] ENGINE Bus STARTED
Jan 20 18:44:58 compute-0 ceph-mon[74381]: [20/Jan/2026:18:44:57] ENGINE Client ('192.168.122.100', 49388) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 11.19 scrub starts
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 11.19 scrub ok
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 7.6 scrub starts
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 7.6 scrub ok
Jan 20 18:44:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 18:44:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 18:44:58 compute-0 ceph-mon[74381]: osdmap e91: 3 total, 3 up, 3 in
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 8.4 scrub starts
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 8.4 scrub ok
Jan 20 18:44:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 8.a scrub starts
Jan 20 18:44:58 compute-0 ceph-mon[74381]: 8.a scrub ok
Jan 20 18:44:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 20 18:44:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 20 18:44:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 18:44:58 compute-0 podman[101733]: 2026-01-20 18:44:58.903585955 +0000 UTC m=+0.222293533 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Jan 20 18:44:58 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.e scrub starts
Jan 20 18:44:58 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.e scrub ok
Jan 20 18:44:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 18:44:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 20 18:44:59 compute-0 podman[101733]: 2026-01-20 18:44:59.114321321 +0000 UTC m=+0.433028869 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1793, distribution-scope=public, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[6.9( empty local-lis/les=0/0 n=0 ec=58/23 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92) [2]/[0] r=0 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92) [2]/[0] r=0 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92) [2]/[0] r=0 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92) [2]/[0] r=0 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92 pruub=15.172308922s) [2] r=-1 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 281.383117676s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92 pruub=15.172276497s) [2] r=-1 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.383117676s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92 pruub=15.170997620s) [2] r=-1 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 281.382812500s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:44:59 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 92 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92 pruub=15.170967102s) [2] r=-1 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 281.382812500s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:44:59 compute-0 podman[101797]: 2026-01-20 18:44:59.387398421 +0000 UTC m=+0.098817485 container exec 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.cepfkm(active, since 4s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:59 compute-0 podman[101828]: 2026-01-20 18:44:59.556743477 +0000 UTC m=+0.127151067 container exec_died 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:59 compute-0 podman[101797]: 2026-01-20 18:44:59.561435721 +0000 UTC m=+0.272854785 container exec_died 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:44:59 compute-0 podman[101876]: 2026-01-20 18:44:59.73237733 +0000 UTC m=+0.042668434 container exec e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:44:59] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Jan 20 18:44:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:44:59] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Jan 20 18:44:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 18:44:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:44:59 compute-0 ceph-mon[74381]: pgmap v6: 337 pgs: 337 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Jan 20 18:44:59 compute-0 ceph-mon[74381]: 12.e scrub starts
Jan 20 18:44:59 compute-0 ceph-mon[74381]: 12.e scrub ok
Jan 20 18:44:59 compute-0 ceph-mon[74381]: 8.12 deep-scrub starts
Jan 20 18:44:59 compute-0 ceph-mon[74381]: 8.12 deep-scrub ok
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 18:44:59 compute-0 ceph-mon[74381]: osdmap e92: 3 total, 3 up, 3 in
Jan 20 18:44:59 compute-0 ceph-mon[74381]: 8.b scrub starts
Jan 20 18:44:59 compute-0 ceph-mon[74381]: 8.b scrub ok
Jan 20 18:44:59 compute-0 ceph-mon[74381]: mgrmap e32: compute-0.cepfkm(active, since 4s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:44:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:44:59 compute-0 podman[101876]: 2026-01-20 18:44:59.898574102 +0000 UTC m=+0.208865206 container exec_died e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:44:59 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 20 18:44:59 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:45:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:45:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 18:45:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 20 18:45:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=93) [2]/[0] r=0 lpr=93 pi=[62,93)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=93) [2]/[0] r=0 lpr=93 pi=[62,93)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=93) [2]/[0] r=0 lpr=93 pi=[62,93)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=93) [2]/[0] r=0 lpr=93 pi=[62,93)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[6.9( v 56'46 (0'0,56'46] local-lis/les=92/93 n=0 ec=58/23 lis/c=67/67 les/c/f=68/68/0 sis=92) [0] r=0 lpr=92 pi=[67,92)/1 crt=56'46 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=92/93 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:00 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 93 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=92/93 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=92) [2]/[0] async=[2] r=0 lpr=92 pi=[62,92)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:00.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:00 compute-0 podman[101987]: 2026-01-20 18:45:00.266225333 +0000 UTC m=+0.055900866 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:00 compute-0 podman[101987]: 2026-01-20 18:45:00.298188641 +0000 UTC m=+0.087864164 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:00 compute-0 sudo[101279]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:00 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:00 compute-0 sudo[102028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:00 compute-0 sudo[102028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:00 compute-0 sudo[102028]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:00 compute-0 sudo[102053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:45:00 compute-0 sudo[102053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v9: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 12 op/s; 44 B/s, 2 objects/s recovering
Jan 20 18:45:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:00.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 20 18:45:00 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 20 18:45:01 compute-0 sudo[102053]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:01 compute-0 sudo[102109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:01 compute-0 sudo[102109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:01 compute-0 sudo[102109]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 20 18:45:01 compute-0 sudo[102134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 20 18:45:01 compute-0 sudo[102134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:01 compute-0 ceph-mon[74381]: 7.3 scrub starts
Jan 20 18:45:01 compute-0 ceph-mon[74381]: 7.3 scrub ok
Jan 20 18:45:01 compute-0 ceph-mon[74381]: 11.1c scrub starts
Jan 20 18:45:01 compute-0 ceph-mon[74381]: 11.1c scrub ok
Jan 20 18:45:01 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:01 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:01 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:45:01 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:45:01 compute-0 ceph-mon[74381]: osdmap e93: 3 total, 3 up, 3 in
Jan 20 18:45:01 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:01 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:01 compute-0 ceph-mon[74381]: 11.8 scrub starts
Jan 20 18:45:01 compute-0 ceph-mon[74381]: 11.8 scrub ok
Jan 20 18:45:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 20 18:45:01 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 20 18:45:01 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 94 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=92/93 n=6 ec=62/41 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=14.854058266s) [2] async=[2] r=-1 lpr=94 pi=[62,94)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 283.186065674s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:01 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 94 pg[9.8( v 49'1085 (0'0,49'1085] local-lis/les=92/93 n=6 ec=62/41 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=14.853852272s) [2] r=-1 lpr=94 pi=[62,94)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 283.186065674s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:01 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 94 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=92/93 n=5 ec=62/41 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=14.853639603s) [2] async=[2] r=-1 lpr=94 pi=[62,94)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 283.186096191s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:01 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 94 pg[9.18( v 49'1085 (0'0,49'1085] local-lis/les=92/93 n=5 ec=62/41 lis/c=92/62 les/c/f=93/63/0 sis=94 pruub=14.853594780s) [2] r=-1 lpr=94 pi=[62,94)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 283.186096191s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:01 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 94 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=93/94 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[62,93)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:01 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 94 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=93/94 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=93) [2]/[0] async=[2] r=0 lpr=93 pi=[62,93)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:01 compute-0 sudo[102134]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 18:45:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:45:01 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:45:01 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:45:01 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:45:01 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:45:01 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:45:01 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:45:01 compute-0 sudo[102180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:45:01 compute-0 sudo[102180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:01 compute-0 sudo[102180]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:01 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.b scrub starts
Jan 20 18:45:01 compute-0 sshd-session[102201]: Accepted publickey for zuul from 192.168.122.30 port 47672 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:45:01 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.b scrub ok
Jan 20 18:45:01 compute-0 systemd-logind[796]: New session 40 of user zuul.
Jan 20 18:45:01 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 20 18:45:01 compute-0 sshd-session[102201]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:45:01 compute-0 sudo[102207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:45:01 compute-0 sudo[102207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:01 compute-0 sudo[102207]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102234]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102234]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102234]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:02 compute-0 sudo[102299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102299]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102336]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-mon[74381]: pgmap v9: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 12 op/s; 44 B/s, 2 objects/s recovering
Jan 20 18:45:02 compute-0 ceph-mon[74381]: 10.8 scrub starts
Jan 20 18:45:02 compute-0 ceph-mon[74381]: 10.8 scrub ok
Jan 20 18:45:02 compute-0 ceph-mon[74381]: 11.12 deep-scrub starts
Jan 20 18:45:02 compute-0 ceph-mon[74381]: 11.12 deep-scrub ok
Jan 20 18:45:02 compute-0 ceph-mon[74381]: osdmap e94: 3 total, 3 up, 3 in
Jan 20 18:45:02 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:02 compute-0 ceph-mon[74381]: 11.16 deep-scrub starts
Jan 20 18:45:02 compute-0 ceph-mon[74381]: 11.16 deep-scrub ok
Jan 20 18:45:02 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:02 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:45:02 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:45:02 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:02 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:45:02 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 18:45:02 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.conf
Jan 20 18:45:02 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.conf
Jan 20 18:45:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 20 18:45:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:02.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 20 18:45:02 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 20 18:45:02 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 95 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=93/94 n=6 ec=62/41 lis/c=93/62 les/c/f=94/63/0 sis=95 pruub=15.003670692s) [2] async=[2] r=-1 lpr=95 pi=[62,95)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 284.342681885s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:02 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 95 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=93/94 n=5 ec=62/41 lis/c=93/62 les/c/f=94/63/0 sis=95 pruub=15.003498077s) [2] async=[2] r=-1 lpr=95 pi=[62,95)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 284.342681885s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:02 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 95 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=93/94 n=5 ec=62/41 lis/c=93/62 les/c/f=94/63/0 sis=95 pruub=15.003322601s) [2] r=-1 lpr=95 pi=[62,95)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 284.342681885s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:02 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 95 pg[9.9( v 49'1085 (0'0,49'1085] local-lis/les=93/94 n=6 ec=62/41 lis/c=93/62 les/c/f=94/63/0 sis=95 pruub=15.003035545s) [2] r=-1 lpr=95 pi=[62,95)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 284.342681885s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:02 compute-0 sudo[102384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102384]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 sudo[102433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102433]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 sudo[102481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 18:45:02 compute-0 sudo[102481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102481]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 sudo[102530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:45:02 compute-0 sudo[102530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102530]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:45:02 compute-0 sudo[102582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102582]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102607]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:02 compute-0 sudo[102632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102632]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 python3.9[102581]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 18:45:02 compute-0 sudo[102657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102657]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 sudo[102729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102729]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v12: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 14 op/s; 54 B/s, 2 objects/s recovering
Jan 20 18:45:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:02.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:02 compute-0 sudo[102754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new
Jan 20 18:45:02 compute-0 sudo[102754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102754]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 20 18:45:02 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:02 compute-0 sudo[102802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:02 compute-0 sudo[102802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:02 compute-0 sudo[102802]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:02 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e95 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:03 compute-0 sudo[102856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 18:45:03 compute-0 sudo[102856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[102856]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[102881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph
Jan 20 18:45:03 compute-0 sudo[102881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[102881]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[102906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[102906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[102906]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[102931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:03 compute-0 sudo[102931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[102931]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[102956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[102956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[102956]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 20 18:45:03 compute-0 sudo[103004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[103004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103004]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[103034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[103034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103034]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[103079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 sudo[103079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103079]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 sudo[103126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:45:03 compute-0 sudo[103126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103126]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[103176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config
Jan 20 18:45:03 compute-0 sudo[103176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103176]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[103229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[103229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103229]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 20 18:45:03 compute-0 ceph-mon[74381]: 12.b scrub starts
Jan 20 18:45:03 compute-0 ceph-mon[74381]: 12.b scrub ok
Jan 20 18:45:03 compute-0 ceph-mon[74381]: 11.1a scrub starts
Jan 20 18:45:03 compute-0 ceph-mon[74381]: 11.1a scrub ok
Jan 20 18:45:03 compute-0 ceph-mon[74381]: osdmap e95: 3 total, 3 up, 3 in
Jan 20 18:45:03 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:03 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:03 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.conf
Jan 20 18:45:03 compute-0 ceph-mon[74381]: 8.15 scrub starts
Jan 20 18:45:03 compute-0 ceph-mon[74381]: 8.15 scrub ok
Jan 20 18:45:03 compute-0 ceph-mon[74381]: pgmap v12: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 14 op/s; 54 B/s, 2 objects/s recovering
Jan 20 18:45:03 compute-0 ceph-mon[74381]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mon[74381]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mon[74381]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Jan 20 18:45:03 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 20 18:45:03 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 2.
Jan 20 18:45:03 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:03 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.588s CPU time.
Jan 20 18:45:03 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:03 compute-0 sudo[103254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:03 compute-0 sudo[103254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103254]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 sudo[103282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[103282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103282]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:03 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 20 18:45:03 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 20 18:45:03 compute-0 python3.9[103227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:45:03 compute-0 sudo[103358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:45:03 compute-0 sudo[103358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:03 compute-0 sudo[103358]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:04 compute-0 sudo[103404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new
Jan 20 18:45:04 compute-0 sudo[103404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:04 compute-0 sudo[103404]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:45:04 compute-0 podman[103397]: 2026-01-20 18:45:04.044423144 +0000 UTC m=+0.056676217 container create a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93008f74cac035563ae48118542e269946171003c03a0ded4d97a2c21df53c79/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93008f74cac035563ae48118542e269946171003c03a0ded4d97a2c21df53c79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93008f74cac035563ae48118542e269946171003c03a0ded4d97a2c21df53c79/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93008f74cac035563ae48118542e269946171003c03a0ded4d97a2c21df53c79/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:04 compute-0 sudo[103436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-aecbbf3b-b405-507b-97d7-637a83f5b4b1/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring.new /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:04 compute-0 sudo[103436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:04 compute-0 podman[103397]: 2026-01-20 18:45:04.019988874 +0000 UTC m=+0.032241997 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:04 compute-0 sudo[103436]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:04 compute-0 podman[103397]: 2026-01-20 18:45:04.126957525 +0000 UTC m=+0.139210588 container init a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:04 compute-0 podman[103397]: 2026-01-20 18:45:04.133698984 +0000 UTC m=+0.145952017 container start a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:45:04 compute-0 bash[103397]: a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:45:04 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v14: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 14 op/s; 53 B/s, 2 objects/s recovering
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:45:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:45:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:04.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:04 compute-0 sudo[103529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:04 compute-0 sudo[103529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:04 compute-0 sudo[103529]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:04 compute-0 sudo[103554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:45:04 compute-0 sudo[103554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 7.e scrub starts
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 7.e scrub ok
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 8.10 scrub starts
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 8.10 scrub ok
Jan 20 18:45:04 compute-0 ceph-mon[74381]: Updating compute-0:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:04 compute-0 ceph-mon[74381]: Updating compute-1:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 11.13 scrub starts
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 11.13 scrub ok
Jan 20 18:45:04 compute-0 ceph-mon[74381]: Updating compute-2:/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/config/ceph.client.admin.keyring
Jan 20 18:45:04 compute-0 ceph-mon[74381]: osdmap e96: 3 total, 3 up, 3 in
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 8.18 scrub starts
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 8.18 scrub ok
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:45:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 8.1c scrub starts
Jan 20 18:45:04 compute-0 ceph-mon[74381]: 8.1c scrub ok
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.803007624 +0000 UTC m=+0.049089264 container create 5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 18:45:04 compute-0 systemd[1]: Started libpod-conmon-5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09.scope.
Jan 20 18:45:04 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.8 deep-scrub starts
Jan 20 18:45:04 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.8 deep-scrub ok
Jan 20 18:45:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.78593835 +0000 UTC m=+0.032020010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:04.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.898421417 +0000 UTC m=+0.144503077 container init 5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.907142668 +0000 UTC m=+0.153224318 container start 5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.912449819 +0000 UTC m=+0.158531479 container attach 5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:45:04 compute-0 agitated_galois[103736]: 167 167
Jan 20 18:45:04 compute-0 systemd[1]: libpod-5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09.scope: Deactivated successfully.
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.916552888 +0000 UTC m=+0.162634528 container died 5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfffa7d21c3b1227aa348325565659c75bb237c49408cfb041dd6b6ce0a0269c-merged.mount: Deactivated successfully.
Jan 20 18:45:04 compute-0 sudo[103774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsqbwaxmsqvawvjmepiaakwiiarikidc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934704.4272523-88-268298907379567/AnsiballZ_command.py'
Jan 20 18:45:04 compute-0 sudo[103774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:45:04 compute-0 podman[103694]: 2026-01-20 18:45:04.967154022 +0000 UTC m=+0.213235662 container remove 5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_galois, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Jan 20 18:45:04 compute-0 systemd[1]: libpod-conmon-5bd6720aa86bbc1b688c8e40a95f244705c036f560f92f307a928f24214e1b09.scope: Deactivated successfully.
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.147648104 +0000 UTC m=+0.053894802 container create da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:45:05 compute-0 python3.9[103780]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:45:05 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 20 18:45:05 compute-0 systemd[1]: Started libpod-conmon-da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76.scope.
Jan 20 18:45:05 compute-0 sudo[103774]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.127846888 +0000 UTC m=+0.034093606 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790adc27b320353e7bf8644a0dca08f09f270f99eb95eeec7538ee798dd7300d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790adc27b320353e7bf8644a0dca08f09f270f99eb95eeec7538ee798dd7300d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790adc27b320353e7bf8644a0dca08f09f270f99eb95eeec7538ee798dd7300d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790adc27b320353e7bf8644a0dca08f09f270f99eb95eeec7538ee798dd7300d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/790adc27b320353e7bf8644a0dca08f09f270f99eb95eeec7538ee798dd7300d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.250623997 +0000 UTC m=+0.156870745 container init da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_keller, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.259951905 +0000 UTC m=+0.166198603 container start da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_keller, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.263549201 +0000 UTC m=+0.169795899 container attach da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_keller, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 18:45:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184505 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:45:05 compute-0 interesting_keller[103806]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:45:05 compute-0 interesting_keller[103806]: --> All data devices are unavailable
Jan 20 18:45:05 compute-0 systemd[1]: libpod-da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76.scope: Deactivated successfully.
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.59913095 +0000 UTC m=+0.505377658 container died da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_keller, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-790adc27b320353e7bf8644a0dca08f09f270f99eb95eeec7538ee798dd7300d-merged.mount: Deactivated successfully.
Jan 20 18:45:05 compute-0 podman[103788]: 2026-01-20 18:45:05.650749941 +0000 UTC m=+0.556996629 container remove da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 18:45:05 compute-0 systemd[1]: libpod-conmon-da5e92073db3f2b84245d4a5ef09f2429aa940487c73e65d4c24618691ce3e76.scope: Deactivated successfully.
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 7.1b scrub starts
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 7.1b scrub ok
Jan 20 18:45:05 compute-0 ceph-mon[74381]: pgmap v14: 337 pgs: 2 remapped+peering, 2 active+remapped, 333 active+clean; 456 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 0 B/s wr, 14 op/s; 53 B/s, 2 objects/s recovering
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 12.8 deep-scrub starts
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 12.8 deep-scrub ok
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 8.19 scrub starts
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 8.19 scrub ok
Jan 20 18:45:05 compute-0 ceph-mon[74381]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 10.3 scrub starts
Jan 20 18:45:05 compute-0 ceph-mon[74381]: 10.3 scrub ok
Jan 20 18:45:05 compute-0 sudo[103554]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:05 compute-0 sudo[103910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:05 compute-0 sudo[103910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:05 compute-0 sudo[103910]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.c scrub starts
Jan 20 18:45:05 compute-0 sudo[103935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:45:05 compute-0 sudo[103935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:05 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 12.c scrub ok
Jan 20 18:45:06 compute-0 sudo[104045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngxxojdeffpdggnyrxapsflhqqxvkeny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934705.6375518-124-225712881012170/AnsiballZ_stat.py'
Jan 20 18:45:06 compute-0 sudo[104045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:45:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v15: 337 pgs: 337 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Jan 20 18:45:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 20 18:45:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 18:45:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 20 18:45:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 18:45:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:06.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:06 compute-0 python3.9[104054]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:45:06 compute-0 sudo[104045]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.293590658 +0000 UTC m=+0.027988414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.423016935 +0000 UTC m=+0.157414641 container create e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 18:45:06 compute-0 systemd[1]: Started libpod-conmon-e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b.scope.
Jan 20 18:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.500631585 +0000 UTC m=+0.235029341 container init e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.508814352 +0000 UTC m=+0.243212058 container start e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Jan 20 18:45:06 compute-0 adoring_lichterman[104120]: 167 167
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.513418564 +0000 UTC m=+0.247816300 container attach e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_lichterman, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:45:06 compute-0 systemd[1]: libpod-e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b.scope: Deactivated successfully.
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.51662757 +0000 UTC m=+0.251025286 container died e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d42e5dc2c7a2bdee4b8a5fe8dbd4313406d8b95f5e48863d437a4d4233eab5b4-merged.mount: Deactivated successfully.
Jan 20 18:45:06 compute-0 podman[104077]: 2026-01-20 18:45:06.555867562 +0000 UTC m=+0.290265288 container remove e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:45:06 compute-0 systemd[1]: libpod-conmon-e727696a333ff88a1339f392aa943aa52868f0ecf2000361530dd67cd361671b.scope: Deactivated successfully.
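The eight podman lines above are one complete lifecycle of a short-lived cephadm helper container (e727696a…, auto-named adoring_lichterman): create, init, start, attach, died, remove, with systemd tearing down the libpod and conmon scopes around it. podman can stream the same events live; a minimal sketch, assuming a podman recent enough to support "podman events --format json" (the field names used below are what current releases emit and should be treated as an assumption):

    import json
    import subprocess

    # Follow podman's event stream; each line is one JSON object
    # describing a state change like the create/init/start/died/remove
    # sequence journald recorded above.
    proc = subprocess.Popen(
        ["podman", "events", "--format", "json"],
        stdout=subprocess.PIPE, text=True,
    )
    for line in proc.stdout:
        ev = json.loads(line)
        if ev.get("Type") == "container":
            print(ev.get("Status"), str(ev.get("ID", ""))[:12], ev.get("Name"))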
Jan 20 18:45:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 20 18:45:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 18:45:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 18:45:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 20 18:45:06 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 20 18:45:06 compute-0 ceph-mon[74381]: 12.c scrub starts
Jan 20 18:45:06 compute-0 ceph-mon[74381]: 12.c scrub ok
Jan 20 18:45:06 compute-0 ceph-mon[74381]: 9.16 deep-scrub starts
Jan 20 18:45:06 compute-0 ceph-mon[74381]: 9.16 deep-scrub ok
Jan 20 18:45:06 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 18:45:06 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 18:45:06 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 18:45:06 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Jan 20 18:45:06 compute-0 ceph-mon[74381]: 12.7 scrub starts
Jan 20 18:45:06 compute-0 ceph-mon[74381]: 12.7 scrub ok
Jan 20 18:45:06 compute-0 podman[104154]: 2026-01-20 18:45:06.729030479 +0000 UTC m=+0.046727351 container create 0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mendel, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:45:06 compute-0 systemd[1]: Started libpod-conmon-0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44.scope.
Jan 20 18:45:06 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 97 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=97 pruub=15.529147148s) [1] r=-1 lpr=97 pi=[62,97)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 289.383178711s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:06 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 97 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=97 pruub=15.529088020s) [1] r=-1 lpr=97 pi=[62,97)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.383178711s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:06 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 97 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=97 pruub=15.528749466s) [1] r=-1 lpr=97 pi=[62,97)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 289.382965088s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:06 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 97 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=97 pruub=15.528717041s) [1] r=-1 lpr=97 pi=[62,97)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 289.382965088s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d124d33064f7dd82cbafa202c780309bb95e397b22108f46eed2a4c4580a634f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d124d33064f7dd82cbafa202c780309bb95e397b22108f46eed2a4c4580a634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d124d33064f7dd82cbafa202c780309bb95e397b22108f46eed2a4c4580a634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d124d33064f7dd82cbafa202c780309bb95e397b22108f46eed2a4c4580a634f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
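The 0x7fffffff in these four warnings is the 32-bit signed time_t ceiling: this overlay sits on an XFS filesystem without the bigtime feature, so inode timestamps stop being representable in January 2038 and the kernel says so on every remount. The date behind the constant is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed epoch second.
    limit = 0x7FFFFFFF
    print(hex(limit), datetime.fromtimestamp(limit, tz=timezone.utc))
    # 0x7fffffff 2038-01-19 03:14:07+00:00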
Jan 20 18:45:06 compute-0 podman[104154]: 2026-01-20 18:45:06.707139138 +0000 UTC m=+0.024836010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:06 compute-0 podman[104154]: 2026-01-20 18:45:06.816734538 +0000 UTC m=+0.134431430 container init 0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mendel, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 18:45:06 compute-0 podman[104154]: 2026-01-20 18:45:06.82398368 +0000 UTC m=+0.141680552 container start 0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mendel, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:45:06 compute-0 podman[104154]: 2026-01-20 18:45:06.827792321 +0000 UTC m=+0.145489213 container attach 0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 20 18:45:06 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 20 18:45:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:06.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
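The anonymous HEAD / requests that radosgw's beast frontend logs every couple of seconds from 192.168.122.100 and .102 have the shape of load-balancer health probes: unauthenticated, no bucket, always 200 with an empty body. A sketch reproducing the probe with only the standard library; the port is an assumption, since the journal records the peer address but not the listener:

    import http.client

    # Hypothetical RGW endpoint/port; substitute the beast frontend's
    # actual bind address from the rgw_frontends setting.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status, resp.reason)  # a healthy rgw answers 200 OK
    conn.close()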
Jan 20 18:45:07 compute-0 objective_mendel[104210]: {
Jan 20 18:45:07 compute-0 objective_mendel[104210]:     "0": [
Jan 20 18:45:07 compute-0 objective_mendel[104210]:         {
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "devices": [
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "/dev/loop3"
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             ],
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "lv_name": "ceph_lv0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "lv_size": "21470642176",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "name": "ceph_lv0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "tags": {
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.cluster_name": "ceph",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.crush_device_class": "",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.encrypted": "0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.osd_id": "0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.type": "block",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.vdo": "0",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:                 "ceph.with_tpm": "0"
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             },
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "type": "block",
Jan 20 18:45:07 compute-0 objective_mendel[104210]:             "vg_name": "ceph_vg0"
Jan 20 18:45:07 compute-0 objective_mendel[104210]:         }
Jan 20 18:45:07 compute-0 objective_mendel[104210]:     ]
Jan 20 18:45:07 compute-0 objective_mendel[104210]: }
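The JSON block printed by the objective_mendel helper is ceph-volume's LVM inventory: a map of OSD id to logical volumes, with the OSD's whole identity (osd_id, osd_fsid, cluster_fsid, …) carried in LV tags rather than in any local config file. Assuming the shape shown above, extracting an osd-to-backing-device map is a few lines:

    import json

    def osd_devices(lvm_list_json: str) -> dict[int, list[str]]:
        # Top-level keys are OSD ids as strings; each value is a list
        # of LV records whose "devices" field names the backing PVs.
        data = json.loads(lvm_list_json)
        return {
            int(osd_id): [dev for lv in lvs for dev in lv["devices"]]
            for osd_id, lvs in data.items()
        }

    # For the output above: osd_devices(text)[0] == ["/dev/loop3"]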
Jan 20 18:45:07 compute-0 systemd[1]: libpod-0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44.scope: Deactivated successfully.
Jan 20 18:45:07 compute-0 podman[104154]: 2026-01-20 18:45:07.152021959 +0000 UTC m=+0.469718871 container died 0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:45:07 compute-0 sudo[104292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxgphrhjeqvjfclqjdcnrydcqqwcomhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934706.7007828-157-56311362550139/AnsiballZ_file.py'
Jan 20 18:45:07 compute-0 sudo[104292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d124d33064f7dd82cbafa202c780309bb95e397b22108f46eed2a4c4580a634f-merged.mount: Deactivated successfully.
Jan 20 18:45:07 compute-0 podman[104154]: 2026-01-20 18:45:07.204017359 +0000 UTC m=+0.521714231 container remove 0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mendel, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:07 compute-0 systemd[1]: libpod-conmon-0fa58b1f91a5726499601f61bf78f96de84834762fb25d946eb3ddff5862df44.scope: Deactivated successfully.
Jan 20 18:45:07 compute-0 sudo[103935]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:07 compute-0 sudo[104306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:07 compute-0 sudo[104306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:07 compute-0 sudo[104306]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:07 compute-0 python3.9[104295]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:45:07 compute-0 sudo[104292]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:07 compute-0 sudo[104331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:45:07 compute-0 sudo[104331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
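This sudo line shows the orchestrator's probe pattern: the mgr copies cephadm to /var/lib/ceph/<fsid>/cephadm.<digest> on the host, then runs it under sudo with a timeout so that ceph-volume executes inside a fresh container; the raw list comes back as {} a second later (the confident_ptolemy output below) because this host has no raw, non-LVM OSD devices. The same query can be issued by hand; a sketch assuming a cephadm binary on PATH:

    import subprocess

    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"  # from the log

    # Everything after `--` is passed through to ceph-volume inside
    # the container that cephadm launches.
    out = subprocess.check_output(
        ["sudo", "cephadm", "ceph-volume", "--fsid", FSID,
         "--", "raw", "list", "--format", "json"],
        text=True,
    )
    print(out)  # "{}" on this host: no raw-mode OSDs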
Jan 20 18:45:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 20 18:45:07 compute-0 ceph-mon[74381]: pgmap v15: 337 pgs: 337 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Jan 20 18:45:07 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 18:45:07 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 18:45:07 compute-0 ceph-mon[74381]: osdmap e97: 3 total, 3 up, 3 in
Jan 20 18:45:07 compute-0 ceph-mon[74381]: 10.14 scrub starts
Jan 20 18:45:07 compute-0 ceph-mon[74381]: 10.14 scrub ok
Jan 20 18:45:07 compute-0 ceph-mon[74381]: 9.e scrub starts
Jan 20 18:45:07 compute-0 ceph-mon[74381]: 9.e scrub ok
Jan 20 18:45:07 compute-0 ceph-mon[74381]: 8.9 scrub starts
Jan 20 18:45:07 compute-0 ceph-mon[74381]: 8.9 scrub ok
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.800174197 +0000 UTC m=+0.043589968 container create 4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 18:45:07 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 20 18:45:07 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 20 18:45:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 20 18:45:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 20 18:45:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 98 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=98) [1]/[0] r=0 lpr=98 pi=[62,98)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 98 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=98) [1]/[0] r=0 lpr=98 pi=[62,98)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 98 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=98) [1]/[0] r=0 lpr=98 pi=[62,98)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:07 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 98 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=98) [1]/[0] r=0 lpr=98 pi=[62,98)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
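These start_peering_interval lines are osd.0 reacting to osdmap e98: with pgp_num bumped, PGs 9.a and 9.1a land in a remapped state where the up set is [1] but the acting set stays [0] (the [1]/[0] in the pg string), so osd.0 flips from the Stray role it took at e97 back to Primary until the new mapping takes over. Individual PGs can be interrogated directly; a sketch (pg query prints JSON by default, with state/up/acting at the top level):

    import json
    import subprocess

    # Query the two PGs seen re-peering in the journal above.
    for pgid in ("9.a", "9.1a"):
        out = subprocess.check_output(["ceph", "pg", pgid, "query"], text=True)
        info = json.loads(out)
        print(pgid, info["state"], "up", info["up"], "acting", info["acting"])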
Jan 20 18:45:07 compute-0 systemd[1]: Started libpod-conmon-4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787.scope.
Jan 20 18:45:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.778652556 +0000 UTC m=+0.022068357 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.886035948 +0000 UTC m=+0.129451729 container init 4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lovelace, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.894960334 +0000 UTC m=+0.138376105 container start 4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lovelace, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.898665153 +0000 UTC m=+0.142080924 container attach 4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lovelace, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:45:07 compute-0 laughing_lovelace[104539]: 167 167
Jan 20 18:45:07 compute-0 systemd[1]: libpod-4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787.scope: Deactivated successfully.
Jan 20 18:45:07 compute-0 conmon[104539]: conmon 4f1e5af57fd4ba684a38 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787.scope/container/memory.events
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.901107387 +0000 UTC m=+0.144523158 container died 4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:45:07 compute-0 sudo[104569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axexuzvjlbrtveglaoyrfrdljrzsklzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934707.6385539-184-22615884232572/AnsiballZ_file.py'
Jan 20 18:45:07 compute-0 sudo[104569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf83ea04dfc6c5fd4586ead2d13fa854dd8e5f78814ee9595d5562aa0bc61cf1-merged.mount: Deactivated successfully.
Jan 20 18:45:07 compute-0 podman[104494]: 2026-01-20 18:45:07.939199179 +0000 UTC m=+0.182614940 container remove 4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_lovelace, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:07 compute-0 systemd[1]: libpod-conmon-4f1e5af57fd4ba684a38d9d2352a1cab99f0788008c644754b64f2d68aed2787.scope: Deactivated successfully.
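The conmon nwarn a few lines up is benign for a container this short-lived: conmon polls the scope's cgroup-v2 memory.events file to report OOM kills, and by the time it looks, systemd has already torn the libpod scope down. For a container that is still running, the file is plain key/value text; a sketch of reading it, where the scope path is the part you must supply:

    import sys
    from pathlib import Path

    # Pass the container cgroup as argv[1], e.g.
    #   /sys/fs/cgroup/machine.slice/libpod-<id>.scope/container
    cg = Path(sys.argv[1])
    for line in (cg / "memory.events").read_text().splitlines():
        key, value = line.split()
        print(key, value)  # fields include low, high, max, oom, oom_kill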
Jan 20 18:45:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:08 compute-0 python3.9[104579]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.122880916 +0000 UTC m=+0.053256695 container create acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:45:08 compute-0 sudo[104569]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:08 compute-0 systemd[1]: Started libpod-conmon-acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b.scope.
Jan 20 18:45:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v18: 337 pgs: 337 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Jan 20 18:45:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 20 18:45:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.100691406 +0000 UTC m=+0.031067205 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 20 18:45:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 18:45:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8968d423f4c7bb0c57766bce47447c3812eed14478ca03809486ffba8841f9a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8968d423f4c7bb0c57766bce47447c3812eed14478ca03809486ffba8841f9a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8968d423f4c7bb0c57766bce47447c3812eed14478ca03809486ffba8841f9a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8968d423f4c7bb0c57766bce47447c3812eed14478ca03809486ffba8841f9a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.215301319 +0000 UTC m=+0.145677108 container init acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ptolemy, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.221851283 +0000 UTC m=+0.152227082 container start acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.226879946 +0000 UTC m=+0.157255735 container attach acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 18:45:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:08.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:08 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.c deep-scrub starts
Jan 20 18:45:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 20 18:45:08 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.c deep-scrub ok
Jan 20 18:45:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 18:45:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 18:45:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 20 18:45:08 compute-0 ceph-mon[74381]: 9.2 scrub starts
Jan 20 18:45:08 compute-0 ceph-mon[74381]: 9.2 scrub ok
Jan 20 18:45:08 compute-0 ceph-mon[74381]: osdmap e98: 3 total, 3 up, 3 in
Jan 20 18:45:08 compute-0 ceph-mon[74381]: 9.6 deep-scrub starts
Jan 20 18:45:08 compute-0 ceph-mon[74381]: 9.6 deep-scrub ok
Jan 20 18:45:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 18:45:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 18:45:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 18:45:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Jan 20 18:45:08 compute-0 ceph-mon[74381]: 12.1a scrub starts
Jan 20 18:45:08 compute-0 ceph-mon[74381]: 12.1a scrub ok
Jan 20 18:45:08 compute-0 lvm[104829]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:45:08 compute-0 lvm[104829]: VG ceph_vg0 finished
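These two lvm lines are event-driven autoactivation: udev ran pvscan, saw that the loop-backed PV completes ceph_vg0, and marked the VG finished, which is what lets the OSD's logical volume appear without any explicit vgchange. LVM can report the same topology as JSON; a sketch assuming lvm2's JSON report format:

    import json
    import subprocess

    out = subprocess.check_output(
        ["pvs", "--reportformat", "json", "-o", "pv_name,vg_name,pv_size"],
        text=True,
    )
    # The report nests as {"report": [{"pv": [...]}]}.
    for pv in json.loads(out)["report"][0]["pv"]:
        print(pv["pv_name"], "->", pv["vg_name"], pv["pv_size"])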
Jan 20 18:45:08 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 20 18:45:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:08.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:08 compute-0 confident_ptolemy[104611]: {}
Jan 20 18:45:08 compute-0 systemd[1]: libpod-acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b.scope: Deactivated successfully.
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.931087293 +0000 UTC m=+0.861463072 container died acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ptolemy, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:08 compute-0 systemd[1]: libpod-acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b.scope: Consumed 1.049s CPU time.
Jan 20 18:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8968d423f4c7bb0c57766bce47447c3812eed14478ca03809486ffba8841f9a1-merged.mount: Deactivated successfully.
Jan 20 18:45:08 compute-0 podman[104591]: 2026-01-20 18:45:08.977045683 +0000 UTC m=+0.907421462 container remove acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_ptolemy, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:08 compute-0 systemd[1]: libpod-conmon-acab6e9667ebc8d05cbb63d4ffe69a8c5212948276999d3019c6260cc0dfa93b.scope: Deactivated successfully.
Jan 20 18:45:09 compute-0 sudo[104331]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 python3.9[104832]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:45:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Jan 20 18:45:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 network[104866]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:45:09 compute-0 network[104868]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:45:09 compute-0 network[104872]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:45:09 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 99 pg[6.b( empty local-lis/les=0/0 n=0 ec=58/23 lis/c=71/71 les/c/f=72/72/0 sis=99) [0] r=0 lpr=99 pi=[71,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:09 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 99 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=98/99 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[62,98)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:09 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 99 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=98/99 n=6 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[62,98)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:09 compute-0 sudo[104867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:45:09 compute-0 sudo[104869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:45:09 compute-0 sudo[104867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:09 compute-0 sudo[104869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:09 compute-0 sudo[104869]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:09 compute-0 sudo[104867]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:09 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 20 18:45:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:09] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Jan 20 18:45:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:09] "GET /metrics HTTP/1.1" 200 46586 "" "Prometheus/2.51.0"
Jan 20 18:45:09 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 20 18:45:09 compute-0 ceph-mon[74381]: pgmap v18: 337 pgs: 337 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 1 objects/s recovering
Jan 20 18:45:09 compute-0 ceph-mon[74381]: 9.c deep-scrub starts
Jan 20 18:45:09 compute-0 ceph-mon[74381]: 9.c deep-scrub ok
Jan 20 18:45:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 18:45:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 18:45:09 compute-0 ceph-mon[74381]: osdmap e99: 3 total, 3 up, 3 in
Jan 20 18:45:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 ceph-mon[74381]: 9.1e deep-scrub starts
Jan 20 18:45:09 compute-0 ceph-mon[74381]: 9.1e deep-scrub ok
Jan 20 18:45:09 compute-0 ceph-mon[74381]: 8.f scrub starts
Jan 20 18:45:09 compute-0 ceph-mon[74381]: 8.f scrub ok
Jan 20 18:45:09 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 18:45:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 18:45:09 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 18:45:09 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 18:45:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 20 18:45:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:09 compute-0 sudo[104939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:09 compute-0 sudo[104939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:09 compute-0 sudo[104939]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 20 18:45:10 compute-0 sudo[104969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 20 18:45:10 compute-0 sudo[104969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 20 18:45:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 100 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=98/99 n=6 ec=62/41 lis/c=98/62 les/c/f=99/63/0 sis=100 pruub=15.027091026s) [1] async=[1] r=-1 lpr=100 pi=[62,100)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 292.171112061s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 100 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=98/99 n=5 ec=62/41 lis/c=98/62 les/c/f=99/63/0 sis=100 pruub=15.023661613s) [1] async=[1] r=-1 lpr=100 pi=[62,100)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 292.167785645s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 100 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=98/99 n=5 ec=62/41 lis/c=98/62 les/c/f=99/63/0 sis=100 pruub=15.023618698s) [1] r=-1 lpr=100 pi=[62,100)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 292.167785645s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 100 pg[9.a( v 49'1085 (0'0,49'1085] local-lis/les=98/99 n=6 ec=62/41 lis/c=98/62 les/c/f=99/63/0 sis=100 pruub=15.026808739s) [1] r=-1 lpr=100 pi=[62,100)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 292.171112061s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 100 pg[6.b( v 56'46 lc 0'0 (0'0,56'46] local-lis/les=99/100 n=1 ec=58/23 lis/c=71/71 les/c/f=72/72/0 sis=99) [0] r=0 lpr=99 pi=[71,99)/1 crt=56'46 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v21: 337 pgs: 337 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 20 18:45:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 20 18:45:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:45:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:45:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
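ganesha's rados_cluster recovery backend logs its return codes as negative errno values; decoding ret=-45 on Linux gives EL2NSYNC ("Level 2 not synchronized"), which fits a grace-enforcement status that has not settled yet rather than a hard failure. That reading of ganesha's intent is an assumption, but the errno decode itself is mechanical:

    import errno
    import os

    ret = -45  # from the rados_cluster_grace_enforcing line above
    print(errno.errorcode[-ret], "=", os.strerror(-ret))
    # EL2NSYNC = Level 2 not synchronized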
Jan 20 18:45:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:10.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.393675425 +0000 UTC m=+0.045299595 container create 481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24 (image=quay.io/ceph/ceph:v19, name=affectionate_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 18:45:10 compute-0 systemd[1]: Started libpod-conmon-481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24.scope.
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.374180017 +0000 UTC m=+0.025804217 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:45:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.49105943 +0000 UTC m=+0.142683650 container init 481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24 (image=quay.io/ceph/ceph:v19, name=affectionate_austin, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.497306116 +0000 UTC m=+0.148930286 container start 481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24 (image=quay.io/ceph/ceph:v19, name=affectionate_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.501561529 +0000 UTC m=+0.153185749 container attach 481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24 (image=quay.io/ceph/ceph:v19, name=affectionate_austin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:45:10 compute-0 affectionate_austin[105048]: 167 167
Jan 20 18:45:10 compute-0 systemd[1]: libpod-481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24.scope: Deactivated successfully.
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.503647414 +0000 UTC m=+0.155271574 container died 481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24 (image=quay.io/ceph/ceph:v19, name=affectionate_austin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2eca03815f116bfb22d0706ab7bf5c68055dfa92c3418a4d533c02b9f21e0c6-merged.mount: Deactivated successfully.
Jan 20 18:45:10 compute-0 podman[105027]: 2026-01-20 18:45:10.541699804 +0000 UTC m=+0.193323974 container remove 481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24 (image=quay.io/ceph/ceph:v19, name=affectionate_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:45:10 compute-0 systemd[1]: libpod-conmon-481ffea6ace2e19b9c7bcb220b54729e4faad3e2808ad62327d1456f729c7f24.scope: Deactivated successfully.
Jan 20 18:45:10 compute-0 sudo[104969]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:10 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.cepfkm (monmap changed)...
Jan 20 18:45:10 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.cepfkm (monmap changed)...
Jan 20 18:45:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 18:45:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.cepfkm on compute-0
Jan 20 18:45:10 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.cepfkm on compute-0
Jan 20 18:45:10 compute-0 sudo[105074]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:10 compute-0 sudo[105074]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:10 compute-0 sudo[105074]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:10 compute-0 sudo[105104]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph:v19 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:10 compute-0 sudo[105104]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:10 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 20 18:45:10 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 20 18:45:10 compute-0 ceph-mon[74381]: 9.1 scrub starts
Jan 20 18:45:10 compute-0 ceph-mon[74381]: 9.1 scrub ok
Jan 20 18:45:10 compute-0 ceph-mon[74381]: Reconfiguring mon.compute-0 (monmap changed)...
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: osdmap e100: 3 total, 3 up, 3 in
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: 12.18 scrub starts
Jan 20 18:45:10 compute-0 ceph-mon[74381]: 12.18 scrub ok
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.cepfkm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:45:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:10.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.063918119 +0000 UTC m=+0.039418168 container create de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861 (image=quay.io/ceph/ceph:v19, name=magical_noether, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 20 18:45:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 18:45:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 18:45:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 20 18:45:11 compute-0 systemd[1]: Started libpod-conmon-de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861.scope.
Jan 20 18:45:11 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 20 18:45:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.135721856 +0000 UTC m=+0.111221915 container init de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861 (image=quay.io/ceph/ceph:v19, name=magical_noether, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.046906187 +0000 UTC m=+0.022406236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.14533468 +0000 UTC m=+0.120834759 container start de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861 (image=quay.io/ceph/ceph:v19, name=magical_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.14908852 +0000 UTC m=+0.124588579 container attach de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861 (image=quay.io/ceph/ceph:v19, name=magical_noether, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:45:11 compute-0 magical_noether[105181]: 167 167
Jan 20 18:45:11 compute-0 systemd[1]: libpod-de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861.scope: Deactivated successfully.
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.149693237 +0000 UTC m=+0.125193276 container died de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861 (image=quay.io/ceph/ceph:v19, name=magical_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 18:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4218ac183da80548418da7bb076071da6026d5851545b214d8ce962d88394236-merged.mount: Deactivated successfully.
Jan 20 18:45:11 compute-0 podman[105161]: 2026-01-20 18:45:11.184950453 +0000 UTC m=+0.160450492 container remove de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861 (image=quay.io/ceph/ceph:v19, name=magical_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:11 compute-0 systemd[1]: libpod-conmon-de14f0da5e44ca18ece0d2218f2b9d9b3361bda85cc6600311517d47c3d39861.scope: Deactivated successfully.
Jan 20 18:45:11 compute-0 sudo[105104]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:11 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 18:45:11 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 18:45:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 18:45:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:45:11 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 18:45:11 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 18:45:11 compute-0 sudo[105225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:11 compute-0 sudo[105225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:11 compute-0 sudo[105225]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:11 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:45:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:11 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:45:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:11 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:45:11 compute-0 sudo[105255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:11 compute-0 sudo[105255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:11 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.12 deep-scrub starts
Jan 20 18:45:11 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.12 deep-scrub ok
Jan 20 18:45:11 compute-0 podman[105320]: 2026-01-20 18:45:11.904589159 +0000 UTC m=+0.044037241 container create 9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:45:11 compute-0 systemd[1]: Started libpod-conmon-9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3.scope.
Jan 20 18:45:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:11 compute-0 podman[105320]: 2026-01-20 18:45:11.965076515 +0000 UTC m=+0.104524607 container init 9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:45:11 compute-0 podman[105320]: 2026-01-20 18:45:11.976413805 +0000 UTC m=+0.115861917 container start 9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_brattain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 18:45:11 compute-0 podman[105320]: 2026-01-20 18:45:11.882319898 +0000 UTC m=+0.021768000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:11 compute-0 podman[105320]: 2026-01-20 18:45:11.980444913 +0000 UTC m=+0.119893035 container attach 9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_brattain, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:45:11 compute-0 reverent_brattain[105335]: 167 167
Jan 20 18:45:11 compute-0 systemd[1]: libpod-9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3.scope: Deactivated successfully.
Jan 20 18:45:11 compute-0 conmon[105335]: conmon 9deb92dd2aa1521f835c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3.scope/container/memory.events
Jan 20 18:45:11 compute-0 podman[105320]: 2026-01-20 18:45:11.983028471 +0000 UTC m=+0.122476543 container died 9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-58316e6955097af3591429d2bbd7014f73cd661b39813dd1550ee429dc53bfc7-merged.mount: Deactivated successfully.
Jan 20 18:45:12 compute-0 podman[105320]: 2026-01-20 18:45:12.018474382 +0000 UTC m=+0.157922454 container remove 9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_brattain, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:45:12 compute-0 systemd[1]: libpod-conmon-9deb92dd2aa1521f835c446bada662c4bbc0edbc12799bb0649d108c0b0fa4c3.scope: Deactivated successfully.
Jan 20 18:45:12 compute-0 sudo[105255]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Jan 20 18:45:12 compute-0 ceph-mon[74381]: pgmap v21: 337 pgs: 337 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:12 compute-0 ceph-mon[74381]: Reconfiguring mgr.compute-0.cepfkm (monmap changed)...
Jan 20 18:45:12 compute-0 ceph-mon[74381]: Reconfiguring daemon mgr.compute-0.cepfkm on compute-0
Jan 20 18:45:12 compute-0 ceph-mon[74381]: 9.1c scrub starts
Jan 20 18:45:12 compute-0 ceph-mon[74381]: 9.1c scrub ok
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 18:45:12 compute-0 ceph-mon[74381]: osdmap e101: 3 total, 3 up, 3 in
Jan 20 18:45:12 compute-0 ceph-mon[74381]: 9.a scrub starts
Jan 20 18:45:12 compute-0 ceph-mon[74381]: 9.a scrub ok
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-mon[74381]: Reconfiguring crash.compute-0 (monmap changed)...
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:12 compute-0 ceph-mon[74381]: Reconfiguring daemon crash.compute-0 on compute-0
Jan 20 18:45:12 compute-0 ceph-mon[74381]: 9.9 scrub starts
Jan 20 18:45:12 compute-0 ceph-mon[74381]: 9.9 scrub ok
Jan 20 18:45:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Jan 20 18:45:12 compute-0 sudo[105351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:12 compute-0 sudo[105351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:12 compute-0 sudo[105351]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v23: 337 pgs: 2 peering, 335 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:12 compute-0 sudo[105376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:12 compute-0 sudo[105376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:12.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.543309156 +0000 UTC m=+0.052012171 container create 94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_keller, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:12 compute-0 systemd[1]: Started libpod-conmon-94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e.scope.
Jan 20 18:45:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.606195926 +0000 UTC m=+0.114898951 container init 94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_keller, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.519685699 +0000 UTC m=+0.028388744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.613538791 +0000 UTC m=+0.122241796 container start 94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.616772476 +0000 UTC m=+0.125475491 container attach 94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_keller, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 18:45:12 compute-0 brave_keller[105433]: 167 167
Jan 20 18:45:12 compute-0 systemd[1]: libpod-94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e.scope: Deactivated successfully.
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.618463702 +0000 UTC m=+0.127166707 container died 94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e100f1c7612c723321e74376c430cb28996b80103c145505c2e161f7c0d7c3d-merged.mount: Deactivated successfully.
Jan 20 18:45:12 compute-0 podman[105416]: 2026-01-20 18:45:12.656129291 +0000 UTC m=+0.164832296 container remove 94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_keller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 18:45:12 compute-0 systemd[1]: libpod-conmon-94dad213a9b2bed88fdf1b261c2d7734378515689422e1ca0441f29c488fdf9e.scope: Deactivated successfully.
Jan 20 18:45:12 compute-0 sudo[105376]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 20 18:45:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 20 18:45:12 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 20 18:45:12 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 20 18:45:12 compute-0 sudo[105457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:12 compute-0 sudo[105457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:12 compute-0 sudo[105457]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:45:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:12.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:45:12 compute-0 sudo[105482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:12 compute-0 sudo[105482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:13 compute-0 systemd[1]: Stopping Ceph node-exporter.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:13 compute-0 ceph-mon[74381]: 9.12 deep-scrub starts
Jan 20 18:45:13 compute-0 ceph-mon[74381]: 9.12 deep-scrub ok
Jan 20 18:45:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:13 compute-0 ceph-mon[74381]: Reconfiguring osd.0 (monmap changed)...
Jan 20 18:45:13 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Jan 20 18:45:13 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:13 compute-0 ceph-mon[74381]: Reconfiguring daemon osd.0 on compute-0
Jan 20 18:45:13 compute-0 ceph-mon[74381]: 9.1a scrub starts
Jan 20 18:45:13 compute-0 ceph-mon[74381]: pgmap v23: 337 pgs: 2 peering, 335 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:13 compute-0 ceph-mon[74381]: 9.1a scrub ok
Jan 20 18:45:13 compute-0 ceph-mon[74381]: 9.8 scrub starts
Jan 20 18:45:13 compute-0 ceph-mon[74381]: 9.8 scrub ok
Jan 20 18:45:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:13 compute-0 ceph-mon[74381]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Jan 20 18:45:13 compute-0 ceph-mon[74381]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Jan 20 18:45:13 compute-0 podman[105554]: 2026-01-20 18:45:13.363336668 +0000 UTC m=+0.044894784 container died ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0efa559753edbff0e68fb9753e884ab2c71245a3a2814ffc4cccc69b3e1fcc9a-merged.mount: Deactivated successfully.
Jan 20 18:45:13 compute-0 podman[105554]: 2026-01-20 18:45:13.406549895 +0000 UTC m=+0.088108001 container remove ce781f31ce1e6337dcc83c72a15848474c3fba90a583434d21a997d8255e9246 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:13 compute-0 bash[105554]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0
Jan 20 18:45:13 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Jan 20 18:45:13 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@node-exporter.compute-0.service: Failed with result 'exit-code'.
Jan 20 18:45:13 compute-0 systemd[1]: Stopped Ceph node-exporter.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:13 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@node-exporter.compute-0.service: Consumed 2.347s CPU time.
Jan 20 18:45:13 compute-0 systemd[1]: Starting Ceph node-exporter.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:13 compute-0 podman[105712]: 2026-01-20 18:45:13.788124486 +0000 UTC m=+0.041182094 container create d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:13 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 20 18:45:13 compute-0 ceph-osd[82836]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 20 18:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbb89bb8266383f9d0e21873cc8368c837d09b4d17a89c603b5d7ae93a3cc44c/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:13 compute-0 podman[105712]: 2026-01-20 18:45:13.834675822 +0000 UTC m=+0.087733450 container init d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:13 compute-0 podman[105712]: 2026-01-20 18:45:13.839243583 +0000 UTC m=+0.092301191 container start d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:13 compute-0 bash[105712]: d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663
Jan 20 18:45:13 compute-0 podman[105712]: 2026-01-20 18:45:13.765929216 +0000 UTC m=+0.018986844 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.844Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.844Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=arp
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=bcache
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=bonding
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=cpu
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=dmi
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=edac
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=entropy
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=filefd
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=hwmon
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=netclass
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=netdev
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=netstat
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=nfs
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=nvme
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=os
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=pressure
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=rapl
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=selinux
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=softnet
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=stat
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=textfile
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=thermal_zone
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=time
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=uname
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=xfs
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.846Z caller=node_exporter.go:117 level=info collector=zfs
Jan 20 18:45:13 compute-0 systemd[1]: Started Ceph node-exporter.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.848Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Jan 20 18:45:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0[105727]: ts=2026-01-20T18:45:13.848Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Jan 20 18:45:13 compute-0 sudo[105482]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:13 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 20 18:45:13 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 20 18:45:13 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 20 18:45:13 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 20 18:45:14 compute-0 sudo[105783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:14 compute-0 sudo[105783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:14 compute-0 sudo[105783]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:14 compute-0 sudo[105835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/alertmanager:v0.25.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:14 compute-0 sudo[105835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:14 compute-0 python3.9[105832]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:45:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v24: 337 pgs: 2 peering, 335 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:14.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:14 compute-0 ceph-mon[74381]: 9.4 scrub starts
Jan 20 18:45:14 compute-0 ceph-mon[74381]: 9.4 scrub ok
Jan 20 18:45:14 compute-0 ceph-mon[74381]: 9.19 scrub starts
Jan 20 18:45:14 compute-0 ceph-mon[74381]: 9.19 scrub ok
Jan 20 18:45:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:14 compute-0 ceph-mon[74381]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Jan 20 18:45:14 compute-0 ceph-mon[74381]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.372445159 +0000 UTC m=+0.041873063 volume create 8e9af8f506d1a74c6d50b923323007816e8c70231199eeaf1a0def097db63c7e
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.382880107 +0000 UTC m=+0.052308011 container create 1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=determined_jackson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 systemd[1]: Started libpod-conmon-1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60.scope.
Jan 20 18:45:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b55fe2a80b87534976c9a7c53744670f428adb3caddd834b85399a5cf6d930a5/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.357624256 +0000 UTC m=+0.027052180 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.461658448 +0000 UTC m=+0.131086372 container init 1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=determined_jackson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.46850032 +0000 UTC m=+0.137928224 container start 1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=determined_jackson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.472131546 +0000 UTC m=+0.141559470 container attach 1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=determined_jackson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 determined_jackson[105916]: 65534 65534
Jan 20 18:45:14 compute-0 systemd[1]: libpod-1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60.scope: Deactivated successfully.
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.474213311 +0000 UTC m=+0.143641235 container died 1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=determined_jackson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b55fe2a80b87534976c9a7c53744670f428adb3caddd834b85399a5cf6d930a5-merged.mount: Deactivated successfully.
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.519399151 +0000 UTC m=+0.188827055 container remove 1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60 (image=quay.io/prometheus/alertmanager:v0.25.0, name=determined_jackson, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[105900]: 2026-01-20 18:45:14.523219752 +0000 UTC m=+0.192647656 volume remove 8e9af8f506d1a74c6d50b923323007816e8c70231199eeaf1a0def097db63c7e
Jan 20 18:45:14 compute-0 systemd[1]: libpod-conmon-1b7f8c7cd11cc579697064feab9ff0429a04fe7be12f15f1715ab961576f7b60.scope: Deactivated successfully.
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.581848169 +0000 UTC m=+0.037858496 volume create aee3ca11d28d45f88f8c2879e9265688ddb6f72f3a6bf3a276d73decc2a2866c
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.591088474 +0000 UTC m=+0.047098801 container create 553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 systemd[1]: Started libpod-conmon-553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5.scope.
Jan 20 18:45:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a62bfca8130e8bb3961e3076c0da8d9f9b6f11179b18e2e305061ec8455b19e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.66022498 +0000 UTC m=+0.116235307 container init 553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.567656862 +0000 UTC m=+0.023667209 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.665958412 +0000 UTC m=+0.121968749 container start 553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 youthful_lehmann[106000]: 65534 65534
Jan 20 18:45:14 compute-0 systemd[1]: libpod-553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5.scope: Deactivated successfully.
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.669555227 +0000 UTC m=+0.125565554 container attach 553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.669894817 +0000 UTC m=+0.125905144 container died 553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a62bfca8130e8bb3961e3076c0da8d9f9b6f11179b18e2e305061ec8455b19e-merged.mount: Deactivated successfully.
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.708139242 +0000 UTC m=+0.164149569 container remove 553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5 (image=quay.io/prometheus/alertmanager:v0.25.0, name=youthful_lehmann, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[105935]: 2026-01-20 18:45:14.712627641 +0000 UTC m=+0.168637968 volume remove aee3ca11d28d45f88f8c2879e9265688ddb6f72f3a6bf3a276d73decc2a2866c
Jan 20 18:45:14 compute-0 systemd[1]: libpod-conmon-553c8b26d36b9a27ea72fc0278f56361fd6dd1dcc790b516854b122fe8ab11b5.scope: Deactivated successfully.
Jan 20 18:45:14 compute-0 systemd[1]: Stopping Ceph alertmanager.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[98876]: ts=2026-01-20T18:45:14.901Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Jan 20 18:45:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:14.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:14 compute-0 podman[106120]: 2026-01-20 18:45:14.911712206 +0000 UTC m=+0.046147376 container died 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e24557620e2c2001eb42373eb94cf38a998383750a0b4c35120e7eb4bee4120-merged.mount: Deactivated successfully.
Jan 20 18:45:14 compute-0 podman[106120]: 2026-01-20 18:45:14.974141264 +0000 UTC m=+0.108576414 container remove 37c94b4d155e72e431120bc1516e6aee70acdb4e82b4f6a7d55a98467b041cf1 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:14 compute-0 podman[106120]: 2026-01-20 18:45:14.978319755 +0000 UTC m=+0.112754925 volume remove 9f2672371bbd283d13046fbb26b794c830d0ef5aa64cb2c388880da63ca12449
Jan 20 18:45:14 compute-0 bash[106120]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0
Jan 20 18:45:15 compute-0 python3.9[106113]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:45:15 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@alertmanager.compute-0.service: Deactivated successfully.
Jan 20 18:45:15 compute-0 systemd[1]: Stopped Ceph alertmanager.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:15 compute-0 systemd[1]: Starting Ceph alertmanager.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:15 compute-0 podman[106229]: 2026-01-20 18:45:15.333185326 +0000 UTC m=+0.033862720 volume create 5d3fc559d37f2f2f6a3bc71d1ccb948f4085bbde398f53e77a96fd41fe472ad5
Jan 20 18:45:15 compute-0 podman[106229]: 2026-01-20 18:45:15.342185106 +0000 UTC m=+0.042862500 container create a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:15 compute-0 ceph-mon[74381]: 9.10 scrub starts
Jan 20 18:45:15 compute-0 ceph-mon[74381]: 9.10 scrub ok
Jan 20 18:45:15 compute-0 ceph-mon[74381]: pgmap v24: 337 pgs: 2 peering, 335 active+clean; 457 KiB data, 126 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:15 compute-0 ceph-mon[74381]: 9.5 scrub starts
Jan 20 18:45:15 compute-0 ceph-mon[74381]: 9.5 scrub ok
Jan 20 18:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdc75b86239f39b989a5ec40ea38789ec171d2780a709179d03c08f8649059e/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fdc75b86239f39b989a5ec40ea38789ec171d2780a709179d03c08f8649059e/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:15 compute-0 podman[106229]: 2026-01-20 18:45:15.406179935 +0000 UTC m=+0.106857329 container init a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:15 compute-0 podman[106229]: 2026-01-20 18:45:15.411077945 +0000 UTC m=+0.111755339 container start a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:15 compute-0 bash[106229]: a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749
Jan 20 18:45:15 compute-0 podman[106229]: 2026-01-20 18:45:15.320928292 +0000 UTC m=+0.021605686 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Jan 20 18:45:15 compute-0 systemd[1]: Started Ceph alertmanager.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.433Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.433Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.442Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.444Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Jan 20 18:45:15 compute-0 sudo[105835]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.480Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.480Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.485Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Jan 20 18:45:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:15.485Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Jan 20 18:45:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:15 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 20 18:45:15 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 20 18:45:15 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Jan 20 18:45:15 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Jan 20 18:45:15 compute-0 sudo[106290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:15 compute-0 sudo[106290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:15 compute-0 sudo[106290]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:15 compute-0 sudo[106316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/grafana:10.4.0 --timeout 895 _orch deploy --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1
Jan 20 18:45:15 compute-0 sudo[106316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.081885564 +0000 UTC m=+0.040024433 container create aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa (image=quay.io/ceph/grafana:10.4.0, name=relaxed_wiles, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 systemd[1]: Started libpod-conmon-aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa.scope.
Jan 20 18:45:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.062411497 +0000 UTC m=+0.020550386 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.165913845 +0000 UTC m=+0.124052734 container init aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa (image=quay.io/ceph/grafana:10.4.0, name=relaxed_wiles, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.172312076 +0000 UTC m=+0.130450955 container start aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa (image=quay.io/ceph/grafana:10.4.0, name=relaxed_wiles, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 relaxed_wiles[106475]: 472 0
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.175369287 +0000 UTC m=+0.133508166 container attach aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa (image=quay.io/ceph/grafana:10.4.0, name=relaxed_wiles, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 systemd[1]: libpod-aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa.scope: Deactivated successfully.
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.176710632 +0000 UTC m=+0.134849501 container died aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa (image=quay.io/ceph/grafana:10.4.0, name=relaxed_wiles, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v25: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1838a123fe19d7342177007fc42b5686533a11e91843a79e4a418f06ec63ac86-merged.mount: Deactivated successfully.
Jan 20 18:45:16 compute-0 podman[106434]: 2026-01-20 18:45:16.216486798 +0000 UTC m=+0.174625667 container remove aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa (image=quay.io/ceph/grafana:10.4.0, name=relaxed_wiles, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 systemd[1]: libpod-conmon-aadd9d45e4e6b79d88b8bbea36c079a40fd27904923a257607ccdce1889153fa.scope: Deactivated successfully.
Jan 20 18:45:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 20 18:45:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 18:45:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 20 18:45:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.274274453 +0000 UTC m=+0.041858383 container create 5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc (image=quay.io/ceph/grafana:10.4.0, name=silly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:16.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:16 compute-0 systemd[1]: Started libpod-conmon-5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc.scope.
Jan 20 18:45:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.254829297 +0000 UTC m=+0.022413257 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.353140136 +0000 UTC m=+0.120724106 container init 5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc (image=quay.io/ceph/grafana:10.4.0, name=silly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.357958324 +0000 UTC m=+0.125542264 container start 5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc (image=quay.io/ceph/grafana:10.4.0, name=silly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 silly_golick[106535]: 472 0
Jan 20 18:45:16 compute-0 systemd[1]: libpod-5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc.scope: Deactivated successfully.
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.362133165 +0000 UTC m=+0.129717205 container attach 5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc (image=quay.io/ceph/grafana:10.4.0, name=silly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.3627039 +0000 UTC m=+0.130287880 container died 5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc (image=quay.io/ceph/grafana:10.4.0, name=silly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-951fb9088735e42dbd449cdedbc6a60d8a159427546d199b5c5f474de11c0e13-merged.mount: Deactivated successfully.
Jan 20 18:45:16 compute-0 podman[106519]: 2026-01-20 18:45:16.412684747 +0000 UTC m=+0.180268687 container remove 5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc (image=quay.io/ceph/grafana:10.4.0, name=silly_golick, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 systemd[1]: libpod-conmon-5749bb1bba8dfb0fe7a5f9b3b2d569d7d0444bc26358bb716c239d7d130250dc.scope: Deactivated successfully.
Jan 20 18:45:16 compute-0 python3.9[106504]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:45:16 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:16 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:16 compute-0 ceph-mon[74381]: Reconfiguring grafana.compute-0 (dependencies changed)...
Jan 20 18:45:16 compute-0 ceph-mon[74381]: Reconfiguring daemon grafana.compute-0 on compute-0
Jan 20 18:45:16 compute-0 ceph-mon[74381]: 9.18 deep-scrub starts
Jan 20 18:45:16 compute-0 ceph-mon[74381]: 9.18 deep-scrub ok
Jan 20 18:45:16 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 18:45:16 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 18:45:16 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 18:45:16 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Jan 20 18:45:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 20 18:45:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 18:45:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 18:45:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 20 18:45:16 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 20 18:45:16 compute-0 systemd[1]: Stopping Ceph grafana.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=server t=2026-01-20T18:45:16.652568976Z level=info msg="Shutdown started" reason="System signal: terminated"
Jan 20 18:45:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=tracing t=2026-01-20T18:45:16.653440279Z level=info msg="Closing tracing"
Jan 20 18:45:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=grafana-apiserver t=2026-01-20T18:45:16.653700237Z level=info msg="StorageObjectCountTracker pruner is exiting"
Jan 20 18:45:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=ticker t=2026-01-20T18:45:16.653779749Z level=info msg=stopped last_tick=2026-01-20T18:45:10Z
Jan 20 18:45:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[99411]: logger=sqlstore.transactions t=2026-01-20T18:45:16.665049048Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 20 18:45:16 compute-0 podman[106588]: 2026-01-20 18:45:16.685584032 +0000 UTC m=+0.069148136 container died e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-850f5843f5d719bc6e0b542275fd051de5218e69cf48a54ae5bd381575205976-merged.mount: Deactivated successfully.
Jan 20 18:45:16 compute-0 podman[106588]: 2026-01-20 18:45:16.730492475 +0000 UTC m=+0.114056579 container remove e8d4a682724f2039aeaadefabc5bd9331e64c1be264a6b723bd77661ef30236d (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:16 compute-0 bash[106588]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0
Jan 20 18:45:16 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@grafana.compute-0.service: Deactivated successfully.
Jan 20 18:45:16 compute-0 systemd[1]: Stopped Ceph grafana.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:16 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@grafana.compute-0.service: Consumed 4.348s CPU time.
Jan 20 18:45:16 compute-0 systemd[1]: Starting Ceph grafana.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:45:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:16.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:17 compute-0 podman[106715]: 2026-01-20 18:45:17.094311444 +0000 UTC m=+0.061188535 container create 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202d5ca682e8641130a9b59dd1ae728955e2f4eb71ab351574f276cbf25d5fd3/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202d5ca682e8641130a9b59dd1ae728955e2f4eb71ab351574f276cbf25d5fd3/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202d5ca682e8641130a9b59dd1ae728955e2f4eb71ab351574f276cbf25d5fd3/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202d5ca682e8641130a9b59dd1ae728955e2f4eb71ab351574f276cbf25d5fd3/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/202d5ca682e8641130a9b59dd1ae728955e2f4eb71ab351574f276cbf25d5fd3/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:17 compute-0 podman[106715]: 2026-01-20 18:45:17.141274891 +0000 UTC m=+0.108151992 container init 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:17 compute-0 podman[106715]: 2026-01-20 18:45:17.151560094 +0000 UTC m=+0.118437185 container start 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:17 compute-0 bash[106715]: 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679
Jan 20 18:45:17 compute-0 podman[106715]: 2026-01-20 18:45:17.073524202 +0000 UTC m=+0.040401313 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Jan 20 18:45:17 compute-0 systemd[1]: Started Ceph grafana.compute-0 for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:45:17 compute-0 sudo[106316]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:17 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 18:45:17 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 18:45:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 18:45:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:45:17 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 18:45:17 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331393509Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-01-20T18:45:17Z
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331669466Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331677166Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331682637Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331687337Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331691867Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331696217Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331700627Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331705747Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331710677Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331715737Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331720007Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331724378Z level=info msg=Target target=[all]
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331732768Z level=info msg="Path Home" path=/usr/share/grafana
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331736798Z level=info msg="Path Data" path=/var/lib/grafana
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331740938Z level=info msg="Path Logs" path=/var/log/grafana
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331745168Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331749688Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=settings t=2026-01-20T18:45:17.331753738Z level=info msg="App mode production"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=sqlstore t=2026-01-20T18:45:17.332085907Z level=info msg="Connecting to DB" dbtype=sqlite3
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=sqlstore t=2026-01-20T18:45:17.332108688Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=migrator t=2026-01-20T18:45:17.332799106Z level=info msg="Starting DB migrations"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=migrator t=2026-01-20T18:45:17.350370942Z level=info msg="migrations completed" performed=0 skipped=547 duration=746.849µs
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=sqlstore t=2026-01-20T18:45:17.351704538Z level=info msg="Created default organization"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=secrets t=2026-01-20T18:45:17.352306824Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Jan 20 18:45:17 compute-0 sudo[106874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frjtdxkrliedkempegsavoyhtqiteknb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934717.0809991-328-67852590555299/AnsiballZ_setup.py'
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugin.store t=2026-01-20T18:45:17.374500913Z level=info msg="Loading plugins..."
Jan 20 18:45:17 compute-0 sudo[106874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=local.finder t=2026-01-20T18:45:17.444861002Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:17.444Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000482062s
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugin.store t=2026-01-20T18:45:17.444900193Z level=info msg="Plugins loaded" count=55 duration=70.400879ms
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=query_data t=2026-01-20T18:45:17.447748888Z level=info msg="Query Service initialization"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=live.push_http t=2026-01-20T18:45:17.450646815Z level=info msg="Live Push Gateway initialization"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ngalert.migration t=2026-01-20T18:45:17.456720616Z level=info msg=Starting
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ngalert.state.manager t=2026-01-20T18:45:17.471750345Z level=info msg="Running in alternative execution of Error/NoData mode"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=infra.usagestats.collector t=2026-01-20T18:45:17.474764165Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=provisioning.datasources t=2026-01-20T18:45:17.477434026Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Jan 20 18:45:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 20 18:45:17 compute-0 ceph-mon[74381]: pgmap v25: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 18:45:17 compute-0 ceph-mon[74381]: osdmap e102: 3 total, 3 up, 3 in
Jan 20 18:45:17 compute-0 ceph-mon[74381]: 9.1b scrub starts
Jan 20 18:45:17 compute-0 ceph-mon[74381]: 9.1b scrub ok
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Jan 20 18:45:17 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=provisioning.alerting t=2026-01-20T18:45:17.507003631Z level=info msg="starting to provision alerting"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=provisioning.alerting t=2026-01-20T18:45:17.507037102Z level=info msg="finished to provision alerting"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ngalert.state.manager t=2026-01-20T18:45:17.507134604Z level=info msg="Warming state cache for startup"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ngalert.multiorg.alertmanager t=2026-01-20T18:45:17.507310779Z level=info msg="Starting MultiOrg Alertmanager"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafanaStorageLogger t=2026-01-20T18:45:17.507521094Z level=info msg="Storage starting"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ngalert.state.manager t=2026-01-20T18:45:17.507535075Z level=info msg="State cache has been initialized" states=0 duration=396.261µs
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ngalert.scheduler t=2026-01-20T18:45:17.507567366Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=ticker t=2026-01-20T18:45:17.507624047Z level=info msg=starting first_tick=2026-01-20T18:45:20Z
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=http.server t=2026-01-20T18:45:17.509528218Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=http.server t=2026-01-20T18:45:17.50998674Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Jan 20 18:45:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 20 18:45:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=provisioning.dashboard t=2026-01-20T18:45:17.552450357Z level=info msg="starting to provision dashboards"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=provisioning.dashboard t=2026-01-20T18:45:17.574901233Z level=info msg="finished to provision dashboards"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugins.update.checker t=2026-01-20T18:45:17.59095349Z level=info msg="Update check succeeded" duration=82.964803ms
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana.update.checker t=2026-01-20T18:45:17.591540605Z level=info msg="Update check succeeded" duration=84.035621ms
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:45:17 compute-0 python3.9[106876]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana-apiserver t=2026-01-20T18:45:17.840216448Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana-apiserver t=2026-01-20T18:45:17.840623338Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Jan 20 18:45:17 compute-0 sudo[106874]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:17 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:18 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Jan 20 18:45:18 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Jan 20 18:45:18 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Jan 20 18:45:18 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Jan 20 18:45:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v28: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 18:45:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:18.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:18 compute-0 sudo[106980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uamoquhnbteqwzvxzwcmwihqkqrfeidq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934717.0809991-328-67852590555299/AnsiballZ_dnf.py'
Jan 20 18:45:18 compute-0 sudo[106980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 20 18:45:18 compute-0 ceph-mon[74381]: Reconfiguring crash.compute-1 (monmap changed)...
Jan 20 18:45:18 compute-0 ceph-mon[74381]: Reconfiguring daemon crash.compute-1 on compute-1
Jan 20 18:45:18 compute-0 ceph-mon[74381]: 9.7 scrub starts
Jan 20 18:45:18 compute-0 ceph-mon[74381]: 9.7 scrub ok
Jan 20 18:45:18 compute-0 ceph-mon[74381]: osdmap e103: 3 total, 3 up, 3 in
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 20 18:45:18 compute-0 python3.9[106982]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:45:18 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 20 18:45:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 104 pg[6.e( v 56'46 (0'0,56'46] local-lis/les=83/84 n=1 ec=58/23 lis/c=83/83 les/c/f=84/84/0 sis=104 pruub=15.693621635s) [1] r=-1 lpr=104 pi=[83,104)/1 crt=56'46 mlcod 56'46 active pruub 301.317626953s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 104 pg[6.e( v 56'46 (0'0,56'46] local-lis/les=83/84 n=1 ec=58/23 lis/c=83/83 les/c/f=84/84/0 sis=104 pruub=15.693530083s) [1] r=-1 lpr=104 pi=[83,104)/1 crt=56'46 mlcod 0'0 unknown NOTIFY pruub 301.317626953s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:45:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:45:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 18:45:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 20 18:45:19 compute-0 ceph-mon[74381]: Reconfiguring osd.1 (monmap changed)...
Jan 20 18:45:19 compute-0 ceph-mon[74381]: Reconfiguring daemon osd.1 on compute-1
Jan 20 18:45:19 compute-0 ceph-mon[74381]: pgmap v28: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:19 compute-0 ceph-mon[74381]: 9.b scrub starts
Jan 20 18:45:19 compute-0 ceph-mon[74381]: 9.b scrub ok
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 18:45:19 compute-0 ceph-mon[74381]: osdmap e104: 3 total, 3 up, 3 in
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:19 compute-0 ceph-mon[74381]: Reconfiguring mon.compute-1 (monmap changed)...
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:45:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:19 compute-0 ceph-mon[74381]: Reconfiguring daemon mon.compute-1 on compute-1
Jan 20 18:45:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 20 18:45:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 20 18:45:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:19] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:19] "GET /metrics HTTP/1.1" 200 48353 "" "Prometheus/2.51.0"
Jan 20 18:45:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:45:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:45:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 18:45:19 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 18:45:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:19 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184520 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:45:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c000fa0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v31: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 18:45:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:20.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 20 18:45:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 106 pg[6.f( empty local-lis/les=0/0 n=0 ec=58/23 lis/c=71/71 les/c/f=72/72/0 sis=106) [0] r=0 lpr=106 pi=[71,106)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:20 compute-0 ceph-mon[74381]: 9.f scrub starts
Jan 20 18:45:20 compute-0 ceph-mon[74381]: 9.f scrub ok
Jan 20 18:45:20 compute-0 ceph-mon[74381]: osdmap e105: 3 total, 3 up, 3 in
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:20 compute-0 ceph-mon[74381]: Reconfiguring mon.compute-2 (monmap changed)...
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: Reconfiguring daemon mon.compute-2 on compute-2
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:45:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:45:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:20 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.pyghhf (monmap changed)...
Jan 20 18:45:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.pyghhf (monmap changed)...
Jan 20 18:45:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 18:45:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:45:20 compute-0 ceph-mgr[74676]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.pyghhf on compute-2
Jan 20 18:45:20 compute-0 ceph-mgr[74676]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.pyghhf on compute-2
Jan 20 18:45:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:20.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:45:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:45:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Jan 20 18:45:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO root] Restarting engine...
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:45:21] ENGINE Bus STOPPING
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:45:21] ENGINE Bus STOPPING
Jan 20 18:45:21 compute-0 sudo[107048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:21 compute-0 sudo[107048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:21 compute-0 sudo[107048]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 20 18:45:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 20 18:45:21 compute-0 sudo[107075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:45:21 compute-0 sudo[107075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 20 18:45:21 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 107 pg[6.f( v 56'46 lc 55'1 (0'0,56'46] local-lis/les=106/107 n=3 ec=58/23 lis/c=71/71 les/c/f=72/72/0 sis=106) [0] r=0 lpr=106 pi=[71,106)/1 crt=56'46 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:21 compute-0 ceph-mon[74381]: pgmap v31: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 18:45:21 compute-0 ceph-mon[74381]: osdmap e106: 3 total, 3 up, 3 in
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mon[74381]: Reconfiguring mgr.compute-2.pyghhf (monmap changed)...
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.pyghhf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: Reconfiguring daemon mgr.compute-2.pyghhf on compute-2
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 20 18:45:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:45:21] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:45:21] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:45:21] ENGINE Bus STOPPED
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:45:21] ENGINE Bus STARTING
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:45:21] ENGINE Bus STOPPED
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:45:21] ENGINE Bus STARTING
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:45:21] ENGINE Serving on http://:::9283
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: [20/Jan/2026:18:45:21] ENGINE Bus STARTED
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:45:21] ENGINE Serving on http://:::9283
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.error] [20/Jan/2026:18:45:21] ENGINE Bus STARTED
Jan 20 18:45:21 compute-0 ceph-mgr[74676]: [prometheus INFO root] Engine started.
Jan 20 18:45:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:21 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:22 compute-0 podman[107188]: 2026-01-20 18:45:22.135962088 +0000 UTC m=+0.055331189 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:45:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v34: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:22 compute-0 podman[107188]: 2026-01-20 18:45:22.225416243 +0000 UTC m=+0.144785354 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:22.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 20 18:45:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:22 compute-0 ceph-mon[74381]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Jan 20 18:45:22 compute-0 ceph-mon[74381]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Jan 20 18:45:22 compute-0 ceph-mon[74381]: from='mon.? -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Jan 20 18:45:22 compute-0 ceph-mon[74381]: osdmap e107: 3 total, 3 up, 3 in
Jan 20 18:45:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 20 18:45:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 20 18:45:22 compute-0 podman[107328]: 2026-01-20 18:45:22.710658277 +0000 UTC m=+0.062016059 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:22 compute-0 podman[107328]: 2026-01-20 18:45:22.720120857 +0000 UTC m=+0.071478619 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:22.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:22 compute-0 podman[107400]: 2026-01-20 18:45:22.947665799 +0000 UTC m=+0.046346272 container exec a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:45:22 compute-0 podman[107400]: 2026-01-20 18:45:22.959123373 +0000 UTC m=+0.057803826 container exec_died a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 20 18:45:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 20 18:45:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 20 18:45:23 compute-0 podman[107466]: 2026-01-20 18:45:23.164859875 +0000 UTC m=+0.051469978 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:45:23 compute-0 podman[107466]: 2026-01-20 18:45:23.174200364 +0000 UTC m=+0.060810447 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:45:23 compute-0 podman[107528]: 2026-01-20 18:45:23.368340037 +0000 UTC m=+0.062831709 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, architecture=x86_64, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.expose-services=, name=keepalived, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 18:45:23 compute-0 podman[107528]: 2026-01-20 18:45:23.386298134 +0000 UTC m=+0.080789746 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., description=keepalived for Ceph, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, version=2.2.4)
Jan 20 18:45:23 compute-0 podman[107594]: 2026-01-20 18:45:23.595711865 +0000 UTC m=+0.066114848 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:23 compute-0 podman[107594]: 2026-01-20 18:45:23.638688215 +0000 UTC m=+0.109091158 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:23 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:45:23 compute-0 ceph-mon[74381]: pgmap v34: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:23 compute-0 ceph-mon[74381]: osdmap e108: 3 total, 3 up, 3 in
Jan 20 18:45:23 compute-0 ceph-mon[74381]: osdmap e109: 3 total, 3 up, 3 in
Jan 20 18:45:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:23 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 20 18:45:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 20 18:45:24 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 20 18:45:24 compute-0 podman[107668]: 2026-01-20 18:45:24.161467985 +0000 UTC m=+0.361846918 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v38: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:24.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:24 compute-0 podman[107668]: 2026-01-20 18:45:24.33002414 +0000 UTC m=+0.530403023 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:45:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:24 compute-0 podman[107782]: 2026-01-20 18:45:24.657547445 +0000 UTC m=+0.056016738 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:24 compute-0 podman[107782]: 2026-01-20 18:45:24.694492096 +0000 UTC m=+0.092961369 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:45:24 compute-0 sudo[107075]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v39: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:45:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:24.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:45:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:45:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:45:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:45:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:45:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:45:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:45:25 compute-0 sudo[107823]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:25 compute-0 sudo[107823]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:25 compute-0 sudo[107823]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Jan 20 18:45:25 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 18:45:25 compute-0 ceph-mon[74381]: osdmap e110: 3 total, 3 up, 3 in
Jan 20 18:45:25 compute-0 ceph-mon[74381]: pgmap v38: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:45:25 compute-0 ceph-mon[74381]: pgmap v39: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:45:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:45:25 compute-0 sudo[107848]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:45:25 compute-0 sudo[107848]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:25.447Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.002792131s
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.524142703 +0000 UTC m=+0.038997476 container create 39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 18:45:25 compute-0 systemd[1]: Started libpod-conmon-39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8.scope.
Jan 20 18:45:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.591389668 +0000 UTC m=+0.106244471 container init 39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brahmagupta, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.598672832 +0000 UTC m=+0.113527605 container start 39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brahmagupta, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.505596961 +0000 UTC m=+0.020451744 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.602015351 +0000 UTC m=+0.116870154 container attach 39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:45:25 compute-0 xenodochial_brahmagupta[107933]: 167 167
Jan 20 18:45:25 compute-0 systemd[1]: libpod-39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8.scope: Deactivated successfully.
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.602911534 +0000 UTC m=+0.117766327 container died 39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 18:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-a991b24c5a64834d168753efe0cdf1564447bca8ecf1f40b2391c70a0573560c-merged.mount: Deactivated successfully.
Jan 20 18:45:25 compute-0 podman[107916]: 2026-01-20 18:45:25.648775804 +0000 UTC m=+0.163630577 container remove 39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:45:25 compute-0 systemd[1]: libpod-conmon-39d140b1be90c855651ef3c159aeee8879d730c69bd8e7226f804144762431b8.scope: Deactivated successfully.
Jan 20 18:45:25 compute-0 podman[107960]: 2026-01-20 18:45:25.789026739 +0000 UTC m=+0.043351387 container create d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 18:45:25 compute-0 systemd[1]: Started libpod-conmon-d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747.scope.
Jan 20 18:45:25 compute-0 podman[107960]: 2026-01-20 18:45:25.769434927 +0000 UTC m=+0.023759585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e32a44706a1c38f93520cc26f7471f86647d4c1b42c373735f122e131c22ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e32a44706a1c38f93520cc26f7471f86647d4c1b42c373735f122e131c22ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e32a44706a1c38f93520cc26f7471f86647d4c1b42c373735f122e131c22ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e32a44706a1c38f93520cc26f7471f86647d4c1b42c373735f122e131c22ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e32a44706a1c38f93520cc26f7471f86647d4c1b42c373735f122e131c22ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:25 compute-0 podman[107960]: 2026-01-20 18:45:25.918706068 +0000 UTC m=+0.173030736 container init d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:25 compute-0 podman[107960]: 2026-01-20 18:45:25.927879196 +0000 UTC m=+0.182203844 container start d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:45:25 compute-0 podman[107960]: 2026-01-20 18:45:25.931370232 +0000 UTC m=+0.185694880 container attach d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 18:45:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:25 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:26 compute-0 ceph-mon[74381]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Jan 20 18:45:26 compute-0 ceph-mon[74381]: Cluster is now healthy
Jan 20 18:45:26 compute-0 trusting_mclaren[107977]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:45:26 compute-0 trusting_mclaren[107977]: --> All data devices are unavailable
Jan 20 18:45:26 compute-0 systemd[1]: libpod-d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747.scope: Deactivated successfully.
Jan 20 18:45:26 compute-0 podman[107960]: 2026-01-20 18:45:26.266164806 +0000 UTC m=+0.520489464 container died d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:45:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:26.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e32a44706a1c38f93520cc26f7471f86647d4c1b42c373735f122e131c22ec-merged.mount: Deactivated successfully.
Jan 20 18:45:26 compute-0 podman[107960]: 2026-01-20 18:45:26.316486851 +0000 UTC m=+0.570811499 container remove d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_mclaren, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:26 compute-0 systemd[1]: libpod-conmon-d92dbf33be19fd49a36afb632aaead38986361ef65f8cdcebb724a1f66389747.scope: Deactivated successfully.
Jan 20 18:45:26 compute-0 sudo[107848]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:26 compute-0 sudo[108003]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:26 compute-0 sudo[108003]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:26 compute-0 sudo[108003]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:26 compute-0 sudo[108028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:45:26 compute-0 sudo[108028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:26 compute-0 podman[108096]: 2026-01-20 18:45:26.836298635 +0000 UTC m=+0.053428830 container create e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tu, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:45:26 compute-0 systemd[1]: Started libpod-conmon-e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b.scope.
Jan 20 18:45:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v40: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 217 B/s rd, 0 B/s wr, 0 op/s; 177 B/s, 3 objects/s recovering
Jan 20 18:45:26 compute-0 podman[108096]: 2026-01-20 18:45:26.905878234 +0000 UTC m=+0.123008429 container init e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:45:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 20 18:45:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 20 18:45:26 compute-0 podman[108096]: 2026-01-20 18:45:26.913903432 +0000 UTC m=+0.131033607 container start e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tu, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:45:26 compute-0 podman[108096]: 2026-01-20 18:45:26.819861089 +0000 UTC m=+0.036991284 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:26 compute-0 laughing_tu[108112]: 167 167
Jan 20 18:45:26 compute-0 systemd[1]: libpod-e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b.scope: Deactivated successfully.
Jan 20 18:45:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:26.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:27 compute-0 podman[108096]: 2026-01-20 18:45:27.044922236 +0000 UTC m=+0.262052431 container attach e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 18:45:27 compute-0 podman[108096]: 2026-01-20 18:45:27.045360878 +0000 UTC m=+0.262491053 container died e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-87c92ce8401ac18330e2dca6cd871f2cbf98fa9466ba564c3222cd1a29014ff9-merged.mount: Deactivated successfully.
Jan 20 18:45:27 compute-0 podman[108096]: 2026-01-20 18:45:27.093972037 +0000 UTC m=+0.311102242 container remove e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:45:27 compute-0 systemd[1]: libpod-conmon-e4145a3b30ceca1b644a11ed7f4c7a5eb51bcc8b7483e19bc204f417e6055e5b.scope: Deactivated successfully.
Jan 20 18:45:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.239562047 +0000 UTC m=+0.045142675 container create 2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:45:27 compute-0 systemd[1]: Started libpod-conmon-2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556.scope.
Jan 20 18:45:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ff009ba4a634f4e86610f5afb8891a0d3a7b9194035186c48d86f09b6877ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ff009ba4a634f4e86610f5afb8891a0d3a7b9194035186c48d86f09b6877ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ff009ba4a634f4e86610f5afb8891a0d3a7b9194035186c48d86f09b6877ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67ff009ba4a634f4e86610f5afb8891a0d3a7b9194035186c48d86f09b6877ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.217408536 +0000 UTC m=+0.022989174 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.319200558 +0000 UTC m=+0.124781196 container init 2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.325820398 +0000 UTC m=+0.131401016 container start 2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.32920523 +0000 UTC m=+0.134785888 container attach 2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184527 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]: {
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:     "0": [
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:         {
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "devices": [
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "/dev/loop3"
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             ],
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "lv_name": "ceph_lv0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "lv_size": "21470642176",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "name": "ceph_lv0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "tags": {
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.cluster_name": "ceph",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.crush_device_class": "",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.encrypted": "0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.osd_id": "0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.type": "block",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.vdo": "0",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:                 "ceph.with_tpm": "0"
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             },
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "type": "block",
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:             "vg_name": "ceph_vg0"
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:         }
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]:     ]
Jan 20 18:45:27 compute-0 nostalgic_robinson[108154]: }
Jan 20 18:45:27 compute-0 systemd[1]: libpod-2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556.scope: Deactivated successfully.
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.599839983 +0000 UTC m=+0.405420601 container died 2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 20 18:45:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-67ff009ba4a634f4e86610f5afb8891a0d3a7b9194035186c48d86f09b6877ae-merged.mount: Deactivated successfully.
Jan 20 18:45:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 18:45:27 compute-0 podman[108137]: 2026-01-20 18:45:27.662679118 +0000 UTC m=+0.468259736 container remove 2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:45:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 20 18:45:27 compute-0 ceph-mon[74381]: pgmap v40: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 217 B/s rd, 0 B/s wr, 0 op/s; 177 B/s, 3 objects/s recovering
Jan 20 18:45:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 20 18:45:27 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Jan 20 18:45:27 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 111 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=2 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=111 pruub=10.646228790s) [1] r=-1 lpr=111 pi=[62,111)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 305.380157471s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:27 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 111 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=2 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=111 pruub=10.646141052s) [1] r=-1 lpr=111 pi=[62,111)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 305.380157471s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:27 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 20 18:45:27 compute-0 systemd[1]: libpod-conmon-2dfbc6fea13fdd68455c6d4b9841cb91534e8fa5fea5a0fac0a54f1db2612556.scope: Deactivated successfully.
Jan 20 18:45:27 compute-0 sudo[108028]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:27 compute-0 sudo[108179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:45:27 compute-0 sudo[108179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:27 compute-0 sudo[108179]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:27 compute-0 sudo[108204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:45:27 compute-0 sudo[108204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:27 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 20 18:45:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 20 18:45:28 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 20 18:45:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 112 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=2 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=112) [1]/[0] r=0 lpr=112 pi=[62,112)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:28 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 112 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=2 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=112) [1]/[0] r=0 lpr=112 pi=[62,112)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c0023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.251516046 +0000 UTC m=+0.040664695 container create 39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:45:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:28.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:28 compute-0 systemd[1]: Started libpod-conmon-39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f.scope.
Jan 20 18:45:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.232911521 +0000 UTC m=+0.022060190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.336650196 +0000 UTC m=+0.125798855 container init 39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.343347257 +0000 UTC m=+0.132495906 container start 39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.347457348 +0000 UTC m=+0.136606027 container attach 39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:45:28 compute-0 agitated_chatterjee[108282]: 167 167
Jan 20 18:45:28 compute-0 systemd[1]: libpod-39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f.scope: Deactivated successfully.
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.352464095 +0000 UTC m=+0.141612754 container died 39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 18:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-91b5498a88e86da43e56289d15a7acfed2f910386a48c03f51f5cc1c69a473ce-merged.mount: Deactivated successfully.
Jan 20 18:45:28 compute-0 podman[108265]: 2026-01-20 18:45:28.400797596 +0000 UTC m=+0.189946245 container remove 39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 18:45:28 compute-0 systemd[1]: libpod-conmon-39885534f60dfc5471a3a5fd9894a9915b546963c1868a90ee9d622475025b0f.scope: Deactivated successfully.
Jan 20 18:45:28 compute-0 podman[108308]: 2026-01-20 18:45:28.57487976 +0000 UTC m=+0.051288793 container create 398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_moore, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:45:28 compute-0 systemd[1]: Started libpod-conmon-398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2.scope.
Jan 20 18:45:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3a59a276185bc00744efaa2af60fdd70fc936ce21a235c660b5f664dbbb9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3a59a276185bc00744efaa2af60fdd70fc936ce21a235c660b5f664dbbb9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3a59a276185bc00744efaa2af60fdd70fc936ce21a235c660b5f664dbbb9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ff3a59a276185bc00744efaa2af60fdd70fc936ce21a235c660b5f664dbbb9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:45:28 compute-0 podman[108308]: 2026-01-20 18:45:28.554074155 +0000 UTC m=+0.030483198 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:45:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:28 compute-0 podman[108308]: 2026-01-20 18:45:28.663675449 +0000 UTC m=+0.140084502 container init 398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_moore, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:45:28 compute-0 podman[108308]: 2026-01-20 18:45:28.66924457 +0000 UTC m=+0.145653593 container start 398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:45:28 compute-0 podman[108308]: 2026-01-20 18:45:28.672468008 +0000 UTC m=+0.148877061 container attach 398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_moore, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:45:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v43: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 215 B/s rd, 0 B/s wr, 0 op/s; 175 B/s, 3 objects/s recovering
Jan 20 18:45:28 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 18:45:28 compute-0 ceph-mon[74381]: osdmap e111: 3 total, 3 up, 3 in
Jan 20 18:45:28 compute-0 ceph-mon[74381]: osdmap e112: 3 total, 3 up, 3 in
Jan 20 18:45:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:28.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 20 18:45:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 20 18:45:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 20 18:45:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 18:45:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 20 18:45:29 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 20 18:45:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 113 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=113 pruub=9.280864716s) [1] r=-1 lpr=113 pi=[62,113)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 305.383728027s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 113 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=113 pruub=9.280793190s) [1] r=-1 lpr=113 pi=[62,113)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 305.383728027s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:29 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 113 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=112/113 n=2 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=112) [1]/[0] async=[1] r=0 lpr=112 pi=[62,112)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:29 compute-0 lvm[108399]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:45:29 compute-0 lvm[108399]: VG ceph_vg0 finished
Jan 20 18:45:29 compute-0 lucid_moore[108325]: {}
Jan 20 18:45:29 compute-0 systemd[1]: libpod-398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2.scope: Deactivated successfully.
Jan 20 18:45:29 compute-0 systemd[1]: libpod-398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2.scope: Consumed 1.202s CPU time.
Jan 20 18:45:29 compute-0 podman[108308]: 2026-01-20 18:45:29.384172619 +0000 UTC m=+0.860581652 container died 398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 20 18:45:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:29] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:45:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:29] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:45:29 compute-0 sudo[108416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:45:29 compute-0 sudo[108416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:29 compute-0 sudo[108416]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:29 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b40095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:45:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:45:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 20 18:45:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ff3a59a276185bc00744efaa2af60fdd70fc936ce21a235c660b5f664dbbb9e-merged.mount: Deactivated successfully.
Jan 20 18:45:30 compute-0 podman[108308]: 2026-01-20 18:45:30.360943922 +0000 UTC m=+1.837352945 container remove 398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_moore, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:45:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 20 18:45:30 compute-0 ceph-mon[74381]: pgmap v43: 337 pgs: 337 active+clean; 457 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 215 B/s rd, 0 B/s wr, 0 op/s; 175 B/s, 3 objects/s recovering
Jan 20 18:45:30 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 20 18:45:30 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Jan 20 18:45:30 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 18:45:30 compute-0 ceph-mon[74381]: osdmap e113: 3 total, 3 up, 3 in
Jan 20 18:45:30 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 20 18:45:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 114 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=114) [1]/[0] r=0 lpr=114 pi=[62,114)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 114 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=114) [1]/[0] r=0 lpr=114 pi=[62,114)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 114 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=112/113 n=2 ec=62/41 lis/c=112/62 les/c/f=113/63/0 sis=114 pruub=14.640603065s) [1] async=[1] r=-1 lpr=114 pi=[62,114)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 312.114440918s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:30 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 114 pg[9.10( v 49'1085 (0'0,49'1085] local-lis/les=112/113 n=2 ec=62/41 lis/c=112/62 les/c/f=113/63/0 sis=114 pruub=14.640565872s) [1] r=-1 lpr=114 pi=[62,114)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 312.114440918s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:30 compute-0 sudo[108204]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:45:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:45:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:30 compute-0 systemd[1]: libpod-conmon-398e833d39ee38888e890a60add3ba083bc53f2483bd789dc202d1cdcae1bfa2.scope: Deactivated successfully.
Jan 20 18:45:30 compute-0 sudo[108442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:45:30 compute-0 sudo[108442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:30 compute-0 sudo[108442]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c0023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v46: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:45:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
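[Editor's note] These beast lines are RGW's access log; the anonymous "HEAD / HTTP/1.0" arriving every ~2 s from 192.168.122.100 and .102 is consistent with a load-balancer health probe rather than client traffic. A parser sketch for the field layout as it appears in these samples (layout inferred from the lines here, not from RGW documentation):

    # Field layout inferred from the samples in this log, not from RGW docs.
    import re

    LINE = ('beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous '
            '[20/Jan/2026:18:45:30.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.001000026s')

    pattern = re.compile(
        r'beast: (?P<ptr>0x[0-9a-f]+): (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s')

    m = pattern.match(LINE)
    print(m.group('ip'), m.group('status'), m.group('latency'))
    # 192.168.122.102 200 0.001000026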
Jan 20 18:45:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 20 18:45:31 compute-0 ceph-mon[74381]: osdmap e114: 3 total, 3 up, 3 in
Jan 20 18:45:31 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:31 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:31 compute-0 ceph-mon[74381]: pgmap v46: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:45:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 20 18:45:31 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 20 18:45:31 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 115 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=114/115 n=5 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=114) [1]/[0] async=[1] r=0 lpr=114 pi=[62,114)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:31 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b40095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:32.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 20 18:45:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:32 compute-0 ceph-mon[74381]: osdmap e115: 3 total, 3 up, 3 in
Jan 20 18:45:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 20 18:45:32 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 20 18:45:32 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 116 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=114/115 n=5 ec=62/41 lis/c=114/62 les/c/f=115/63/0 sis=116 pruub=14.806450844s) [1] async=[1] r=-1 lpr=116 pi=[62,116)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 314.719726562s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:32 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 116 pg[9.11( v 49'1085 (0'0,49'1085] local-lis/les=114/115 n=5 ec=62/41 lis/c=114/62 les/c/f=115/63/0 sis=116 pruub=14.806324005s) [1] r=-1 lpr=116 pi=[62,116)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 314.719726562s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v49: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:45:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:32.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 20 18:45:33 compute-0 ceph-mon[74381]: osdmap e116: 3 total, 3 up, 3 in
Jan 20 18:45:33 compute-0 ceph-mon[74381]: pgmap v49: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:45:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 20 18:45:33 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 20 18:45:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:33 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c002580 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:34.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b40095a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:34 compute-0 ceph-mon[74381]: osdmap e117: 3 total, 3 up, 3 in
Jan 20 18:45:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v51: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:45:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:34.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:35 compute-0 ceph-mon[74381]: pgmap v51: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 18:45:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:35 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:36.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v52: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 20 18:45:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 20 18:45:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:36.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 20 18:45:37 compute-0 ceph-mgr[74676]: [dashboard INFO request] [192.168.122.100:42528] [POST] [200] [0.117s] [4.0B] [24787437-da19-43e9-b6ad-0af6fb07b3ff] /api/prometheus_receiver
Jan 20 18:45:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 18:45:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 20 18:45:37 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 20 18:45:37 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Jan 20 18:45:37 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 118 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=118 pruub=8.955689430s) [1] r=-1 lpr=118 pi=[62,118)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 313.383636475s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:37 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 20 18:45:37 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 118 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=118 pruub=8.955638885s) [1] r=-1 lpr=118 pi=[62,118)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 313.383636475s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:37 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a640 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 18:45:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 20 18:45:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 20 18:45:37 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 20 18:45:37 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 119 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=119) [1]/[0] r=0 lpr=119 pi=[62,119)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:37 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 119 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=62/63 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=119) [1]/[0] r=0 lpr=119 pi=[62,119)/1 crt=49'1085 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 18:45:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:38.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:38 compute-0 ceph-mon[74381]: pgmap v52: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:38 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 18:45:38 compute-0 ceph-mon[74381]: osdmap e118: 3 total, 3 up, 3 in
Jan 20 18:45:38 compute-0 ceph-mon[74381]: osdmap e119: 3 total, 3 up, 3 in
Jan 20 18:45:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v55: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 20 18:45:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 20 18:45:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:45:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:38.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:45:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 20 18:45:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 18:45:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 20 18:45:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 20 18:45:39 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 120 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=119/120 n=4 ec=62/41 lis/c=62/62 les/c/f=63/63/0 sis=119) [1]/[0] async=[1] r=0 lpr=119 pi=[62,119)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:45:39 compute-0 ceph-mon[74381]: pgmap v55: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 20 18:45:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Jan 20 18:45:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 18:45:39 compute-0 ceph-mon[74381]: osdmap e120: 3 total, 3 up, 3 in
Jan 20 18:45:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:39] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:45:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:39] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:45:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:39 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 20 18:45:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 20 18:45:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 20 18:45:40 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 20 18:45:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 121 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=119/120 n=4 ec=62/41 lis/c=119/62 les/c/f=120/63/0 sis=121 pruub=15.008261681s) [1] async=[1] r=-1 lpr=121 pi=[62,121)/1 crt=49'1085 lcod 0'0 mlcod 0'0 active pruub 322.080383301s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:45:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 121 pg[9.12( v 49'1085 (0'0,49'1085] local-lis/les=119/120 n=4 ec=62/41 lis/c=119/62 les/c/f=120/63/0 sis=121 pruub=15.008207321s) [1] r=-1 lpr=121 pi=[62,121)/1 crt=49'1085 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 322.080383301s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 18:45:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a640 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:40.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v58: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 20 18:45:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:40.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:45:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:45:40 compute-0 ceph-mon[74381]: osdmap e121: 3 total, 3 up, 3 in
Jan 20 18:45:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 20 18:45:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 20 18:45:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 20 18:45:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:41 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:42 compute-0 ceph-mon[74381]: pgmap v58: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 20 18:45:42 compute-0 ceph-mon[74381]: osdmap e122: 3 total, 3 up, 3 in
Jan 20 18:45:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:42.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a640 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v60: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 20 18:45:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:42.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:45:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:43 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:44 compute-0 ceph-mon[74381]: pgmap v60: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 20 18:45:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:44 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:44.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:44 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v61: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 20 18:45:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:44.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:45 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a640 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:46 compute-0 ceph-mon[74381]: pgmap v61: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 20 18:45:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:46 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:45:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:46.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:45:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:46 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 388 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Jan 20 18:45:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 20 18:45:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 20 18:45:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:46.951Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:45:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:46.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Jan 20 18:45:47 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 20 18:45:47 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Jan 20 18:45:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 18:45:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Jan 20 18:45:47 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Jan 20 18:45:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:47 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:45:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:48 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a640 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:48 compute-0 ceph-mon[74381]: pgmap v62: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 388 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Jan 20 18:45:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 18:45:48 compute-0 ceph-mon[74381]: osdmap e123: 3 total, 3 up, 3 in
Jan 20 18:45:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:48.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:48 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v64: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Jan 20 18:45:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 20 18:45:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 20 18:45:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:48.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Jan 20 18:45:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 18:45:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Jan 20 18:45:49 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Jan 20 18:45:49 compute-0 ceph-mon[74381]: pgmap v64: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Jan 20 18:45:49 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 20 18:45:49 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Jan 20 18:45:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:49] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:45:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:49] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:45:49 compute-0 sudo[108563]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:45:49 compute-0 sudo[108563]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:45:49 compute-0 sudo[108563]: pam_unix(sudo:session): session closed for user root
Jan 20 18:45:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:49 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:50 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Jan 20 18:45:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Jan 20 18:45:50 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Jan 20 18:45:50 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 18:45:50 compute-0 ceph-mon[74381]: osdmap e124: 3 total, 3 up, 3 in
Jan 20 18:45:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:50.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:50 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v67: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:45:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:50.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Jan 20 18:45:51 compute-0 ceph-mon[74381]: osdmap e125: 3 total, 3 up, 3 in
Jan 20 18:45:51 compute-0 ceph-mon[74381]: pgmap v67: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:45:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Jan 20 18:45:51 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Jan 20 18:45:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:51 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:52 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Jan 20 18:45:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Jan 20 18:45:52 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Jan 20 18:45:52 compute-0 ceph-mon[74381]: osdmap e126: 3 total, 3 up, 3 in
Jan 20 18:45:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:52.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:52 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v70: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:45:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:52.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:45:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e127 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:45:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Jan 20 18:45:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Jan 20 18:45:53 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Jan 20 18:45:53 compute-0 ceph-mon[74381]: osdmap e127: 3 total, 3 up, 3 in
Jan 20 18:45:53 compute-0 ceph-mon[74381]: pgmap v70: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:53 compute-0 ceph-mon[74381]: osdmap e128: 3 total, 3 up, 3 in
Jan 20 18:45:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:53 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:54 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9980016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:54.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:54 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:45:54
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Some PGs (0.002967) are inactive; try again later
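[Editor's note] The balancer bails out here because any inactive PG blocks an upmap optimization pass; the fraction it reports is exactly the one peering PG out of the 337 shown in the surrounding pgmap lines. A one-line check:

    # 1 peering PG out of 337 (see the pgmap lines around this entry)
    print(1 / 337)    # 0.0029673..., logged as 0.002967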
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v72: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:54.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:45:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
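[Editor's note] The pg_autoscaler figures above are internally consistent if one assumes 3 OSDs times the default mon_target_pg_per_osd of 100, i.e. 300 target PGs cluster-wide (the total itself is not printed): each pool's pg target is its share of raw space times its bias times 300. Because every computed target sits far below the pool's current PG count, the quantized value simply stays at "current" and no resize is proposed. A worked check against two of the logged pools:

    # Worked check of the autoscaler arithmetic, assuming 3 OSDs x
    # mon_target_pg_per_osd=100 -> 300 target PGs cluster-wide.
    def pg_target(usage_ratio, bias, total_pg_target=300):
        return usage_ratio * bias * total_pg_target

    print(pg_target(7.185749983720779e-06, 1.0))
    # ~0.0021557, as logged for '.mgr'
    print(pg_target(5.087256625643029e-07, 4.0))
    # ~0.0006105, as logged for 'cephfs.cephfs.meta'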
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:45:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:45:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:45:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:55 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:56 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:56 compute-0 ceph-mon[74381]: pgmap v72: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:45:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:56.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:56 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9980016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v73: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 20 18:45:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 20 18:45:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:56.952Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:45:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:56.953Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:45:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:45:56.953Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
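[Editor's note] Alertmanager delivers to the local dashboard receiver (the POST to /api/prometheus_receiver at 18:45:37 returned 200 in 0.117 s) but dials out to the same endpoint on compute-1 and compute-2 and hits i/o timeouts, which points at network reachability or a stopped mgr on those peers rather than a bad receiver config. A quick probe one might run from compute-0, with the URL copied verbatim from the error above (illustrative only, not part of the deployment):

    # Illustrative probe, not part of the deployment; URL copied from the error.
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        req = urllib.request.Request(url, data=b"{}", method="POST")
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)            # endpoint reachable
    except OSError as exc:                # timeouts and refusals land here
        print("unreachable:", exc)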
Jan 20 18:45:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:56.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Jan 20 18:45:57 compute-0 ceph-mon[74381]: pgmap v73: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:57 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 20 18:45:57 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Jan 20 18:45:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 18:45:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Jan 20 18:45:57 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Jan 20 18:45:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:45:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Jan 20 18:45:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:57 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Jan 20 18:45:58 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Jan 20 18:45:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:58 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 18:45:58 compute-0 ceph-mon[74381]: osdmap e129: 3 total, 3 up, 3 in
Jan 20 18:45:58 compute-0 ceph-mon[74381]: osdmap e130: 3 total, 3 up, 3 in
Jan 20 18:45:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:45:58.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:58 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:45:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 20 18:45:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 20 18:45:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:45:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:45:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:45:58.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:45:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Jan 20 18:45:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 18:45:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Jan 20 18:45:59 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Jan 20 18:45:59 compute-0 ceph-mon[74381]: pgmap v76: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:45:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 20 18:45:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Jan 20 18:45:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 18:45:59 compute-0 ceph-mon[74381]: osdmap e131: 3 total, 3 up, 3 in
Jan 20 18:45:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:59] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:45:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:45:59] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:45:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:45:59 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Jan 20 18:46:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:00 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9980016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Jan 20 18:46:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Jan 20 18:46:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:00.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:00 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v79: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 20 18:46:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:00.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Jan 20 18:46:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Jan 20 18:46:01 compute-0 ceph-mon[74381]: osdmap e132: 3 total, 3 up, 3 in
Jan 20 18:46:01 compute-0 ceph-mon[74381]: pgmap v79: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Jan 20 18:46:01 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Jan 20 18:46:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:01 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:02 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:02 compute-0 ceph-mon[74381]: osdmap e133: 3 total, 3 up, 3 in
Jan 20 18:46:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:02.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:02 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v81: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 20 18:46:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:02.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:03 compute-0 ceph-mon[74381]: pgmap v81: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 22 B/s, 0 objects/s recovering
Jan 20 18:46:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:03 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:04.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:04 compute-0 sudo[106980]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v82: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 20 18:46:04 compute-0 sudo[108753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncghoxoctrylsldkhhkdgqavuvywbmzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934764.6907353-364-244468686459731/AnsiballZ_command.py'
Jan 20 18:46:04 compute-0 sudo[108753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:05 compute-0 python3.9[108755]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:46:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:06 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:06 compute-0 ceph-mon[74381]: pgmap v82: 337 pgs: 1 peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Jan 20 18:46:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:06 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:06 compute-0 sudo[108753]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:06.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:06 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:06 compute-0 sudo[109042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msrboebjsrapgnuvjdduuzpuowomjzow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934766.297916-388-262833656512082/AnsiballZ_selinux.py'
Jan 20 18:46:06 compute-0 sudo[109042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v83: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 388 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Jan 20 18:46:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 20 18:46:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 20 18:46:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:06.954Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:46:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:06.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Jan 20 18:46:07 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 20 18:46:07 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Jan 20 18:46:07 compute-0 python3.9[109044]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 18:46:07 compute-0 sudo[109042]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 18:46:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Jan 20 18:46:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Jan 20 18:46:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:08 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:08 compute-0 sudo[109196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puyfzceciphdwmmfjlychmtggtdntoyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934767.7632298-421-257621157746167/AnsiballZ_command.py'
Jan 20 18:46:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:08 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:08 compute-0 sudo[109196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:08 compute-0 ceph-mon[74381]: pgmap v83: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 388 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Jan 20 18:46:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 18:46:08 compute-0 ceph-mon[74381]: osdmap e134: 3 total, 3 up, 3 in
Jan 20 18:46:08 compute-0 python3.9[109198]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 18:46:08 compute-0 sudo[109196]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:08.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:08 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:08 compute-0 sudo[109348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgrfrydxmblseghyzzirwijvremwnhkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934768.5391226-445-263335711583156/AnsiballZ_file.py'
Jan 20 18:46:08 compute-0 sudo[109348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v85: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Jan 20 18:46:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 20 18:46:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 20 18:46:08 compute-0 python3.9[109350]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:46:08 compute-0 sudo[109348]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:08.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 20 18:46:09 compute-0 ceph-mon[74381]: pgmap v85: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s
Jan 20 18:46:09 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 20 18:46:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Jan 20 18:46:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 18:46:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 20 18:46:09 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 20 18:46:09 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 135 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=95/95 les/c/f=96/96/0 sis=135) [0] r=0 lpr=135 pi=[95,135)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:09 compute-0 sudo[109502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amkasfsrmndlfqmeqrmnhakiwzuldiyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934769.3183174-469-99929803153308/AnsiballZ_mount.py'
Jan 20 18:46:09 compute-0 sudo[109502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:09] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:46:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:09] "GET /metrics HTTP/1.1" 200 48320 "" "Prometheus/2.51.0"
Jan 20 18:46:09 compute-0 python3.9[109504]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 18:46:09 compute-0 sudo[109502]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:09 compute-0 sudo[109505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:46:09 compute-0 sudo[109505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:09 compute-0 sudo[109505]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 20 18:46:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 18:46:10 compute-0 ceph-mon[74381]: osdmap e135: 3 total, 3 up, 3 in
Jan 20 18:46:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:46:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 20 18:46:10 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 20 18:46:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 136 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=95/95 les/c/f=96/96/0 sis=136) [0]/[2] r=-1 lpr=136 pi=[95,136)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:10 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 136 pg[9.19( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=95/95 les/c/f=96/96/0 sis=136) [0]/[2] r=-1 lpr=136 pi=[95,136)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 18:46:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:10.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v88: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:46:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:10.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 20 18:46:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 20 18:46:11 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 20 18:46:11 compute-0 ceph-mon[74381]: osdmap e136: 3 total, 3 up, 3 in
Jan 20 18:46:11 compute-0 ceph-mon[74381]: pgmap v88: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:46:11 compute-0 sudo[109680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmkrvofupovaadicpopzwebfbmbxiwgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934771.0939252-553-12620960762443/AnsiballZ_file.py'
Jan 20 18:46:11 compute-0 sudo[109680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:11 compute-0 python3.9[109683]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:46:11 compute-0 sudo[109680]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:12 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:12 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:12 compute-0 sudo[109833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pekqawtdcfwpfbapgtfbvxjlrqggnupm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934772.0173347-577-254723679935654/AnsiballZ_stat.py'
Jan 20 18:46:12 compute-0 sudo[109833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:12.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 20 18:46:12 compute-0 ceph-mon[74381]: osdmap e137: 3 total, 3 up, 3 in
Jan 20 18:46:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 20 18:46:12 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 20 18:46:12 compute-0 python3.9[109835]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:46:12 compute-0 sudo[109833]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:12 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 138 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=7 ec=62/41 lis/c=136/95 les/c/f=137/96/0 sis=138) [0] r=0 lpr=138 pi=[95,138)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:12 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 138 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=7 ec=62/41 lis/c=136/95 les/c/f=137/96/0 sis=138) [0] r=0 lpr=138 pi=[95,138)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:12 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:12 compute-0 sudo[109911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtebkhntsyetmmlxeghebhyjknqodakw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934772.0173347-577-254723679935654/AnsiballZ_file.py'
Jan 20 18:46:12 compute-0 sudo[109911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v91: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:46:12 compute-0 python3.9[109913]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:46:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184612 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:46:12 compute-0 sudo[109911]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:13.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 20 18:46:13 compute-0 ceph-mon[74381]: osdmap e138: 3 total, 3 up, 3 in
Jan 20 18:46:13 compute-0 ceph-mon[74381]: pgmap v91: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:46:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 20 18:46:13 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 20 18:46:13 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 139 pg[9.19( v 49'1085 (0'0,49'1085] local-lis/les=138/139 n=7 ec=62/41 lis/c=136/95 les/c/f=137/96/0 sis=138) [0] r=0 lpr=138 pi=[95,138)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:46:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:14 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:14 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:14.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:14 compute-0 sudo[110065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdbivcswlkmxcastdjzgjhyzddwebemh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934774.291805-640-153383146773468/AnsiballZ_stat.py'
Jan 20 18:46:14 compute-0 sudo[110065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:14 compute-0 ceph-mon[74381]: osdmap e139: 3 total, 3 up, 3 in
Jan 20 18:46:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:14 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:14 compute-0 python3.9[110067]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:46:14 compute-0 sudo[110065]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v93: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:46:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:15.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:15 compute-0 ceph-mon[74381]: pgmap v93: 337 pgs: 1 remapped+peering, 336 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Jan 20 18:46:15 compute-0 sudo[110221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kswwpfgxuunbdtwbbvmlkbqbygoaicrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934775.448493-679-54144216664814/AnsiballZ_getent.py'
Jan 20 18:46:15 compute-0 sudo[110221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:16 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:16 compute-0 python3.9[110223]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 18:46:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:16 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:16 compute-0 sudo[110221]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:46:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:16.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:46:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:16 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:16 compute-0 sudo[110374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcbcuguskfvfiinqbgpnpmsqzvcjlhdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934776.5794027-709-115346426008520/AnsiballZ_getent.py'
Jan 20 18:46:16 compute-0 sudo[110374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v94: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Jan 20 18:46:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 20 18:46:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 20 18:46:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:16.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:46:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:46:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:17.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:46:17 compute-0 python3.9[110376]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 18:46:17 compute-0 sudo[110374]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 20 18:46:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 18:46:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 20 18:46:17 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 20 18:46:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Jan 20 18:46:17 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 140 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=100/100 les/c/f=101/101/0 sis=140) [0] r=0 lpr=140 pi=[100,140)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:17 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 20 18:46:17 compute-0 sudo[110529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sikiruncbvdeithczyspmswllmvwqnbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934777.351207-733-149282256570038/AnsiballZ_group.py'
Jan 20 18:46:17 compute-0 sudo[110529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 20 18:46:18 compute-0 python3.9[110531]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:46:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:18 compute-0 sudo[110529]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:18.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 20 18:46:18 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 20 18:46:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 141 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=100/100 les/c/f=101/101/0 sis=141) [0]/[1] r=-1 lpr=141 pi=[100,141)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:18 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 141 pg[9.1a( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=100/100 les/c/f=101/101/0 sis=141) [0]/[1] r=-1 lpr=141 pi=[100,141)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 18:46:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=infra.usagestats t=2026-01-20T18:46:18.519262618Z level=info msg="Usage stats are ready to report"
Jan 20 18:46:18 compute-0 sudo[110681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psnydjnecjoskuaiymchkjrxyhnrejcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934778.387543-760-89833604653701/AnsiballZ_file.py'
Jan 20 18:46:18 compute-0 sudo[110681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:18 compute-0 ceph-mon[74381]: pgmap v94: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Jan 20 18:46:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 18:46:18 compute-0 ceph-mon[74381]: osdmap e140: 3 total, 3 up, 3 in
Jan 20 18:46:18 compute-0 ceph-mon[74381]: osdmap e141: 3 total, 3 up, 3 in
Jan 20 18:46:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:18 compute-0 python3.9[110683]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 18:46:18 compute-0 sudo[110681]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v97: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Jan 20 18:46:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 20 18:46:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 20 18:46:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:19.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 20 18:46:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 18:46:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 20 18:46:19 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 20 18:46:19 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 142 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=74/74 les/c/f=75/75/0 sis=142) [0] r=0 lpr=142 pi=[74,142)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:19 compute-0 sudo[110835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqkkdgepmqpnrgfoipuemwbpewoaffg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934779.4262726-793-46936904631462/AnsiballZ_dnf.py'
Jan 20 18:46:19 compute-0 sudo[110835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:19 compute-0 ceph-mon[74381]: pgmap v97: 337 pgs: 337 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Jan 20 18:46:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 20 18:46:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Jan 20 18:46:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 18:46:19 compute-0 ceph-mon[74381]: osdmap e142: 3 total, 3 up, 3 in
Jan 20 18:46:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:19] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 20 18:46:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:19] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 20 18:46:19 compute-0 python3.9[110837]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:46:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:20.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 20 18:46:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 20 18:46:20 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 20 18:46:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 143 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=4 ec=62/41 lis/c=141/100 les/c/f=142/101/0 sis=143) [0] r=0 lpr=143 pi=[100,143)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 143 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=74/74 les/c/f=75/75/0 sis=143) [0]/[2] r=-1 lpr=143 pi=[74,143)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 143 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=4 ec=62/41 lis/c=141/100 les/c/f=142/101/0 sis=143) [0] r=0 lpr=143 pi=[100,143)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:20 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 143 pg[9.1b( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=74/74 les/c/f=75/75/0 sis=143) [0]/[2] r=-1 lpr=143 pi=[74,143)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 18:46:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v100: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s; 27 B/s, 0 objects/s recovering
Jan 20 18:46:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:21.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:21 compute-0 sudo[110835]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 20 18:46:21 compute-0 ceph-mon[74381]: osdmap e143: 3 total, 3 up, 3 in
Jan 20 18:46:21 compute-0 ceph-mon[74381]: pgmap v100: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s; 27 B/s, 0 objects/s recovering
Jan 20 18:46:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 20 18:46:21 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 20 18:46:21 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 144 pg[9.1a( v 49'1085 (0'0,49'1085] local-lis/les=143/144 n=4 ec=62/41 lis/c=141/100 les/c/f=142/101/0 sis=143) [0] r=0 lpr=143 pi=[100,143)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:46:21 compute-0 sudo[110990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stdiqzcarctrndcftgxfizcfbzjqdfix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934781.7225428-817-259118350837944/AnsiballZ_file.py'
Jan 20 18:46:21 compute-0 sudo[110990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:22 compute-0 python3.9[110992]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:46:22 compute-0 sudo[110990]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:46:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:22 compute-0 sudo[111145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgvbodjeupmxslnxsanuelkrdmuijsxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934782.4801488-841-220179722643723/AnsiballZ_stat.py'
Jan 20 18:46:22 compute-0 sudo[111145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 20 18:46:22 compute-0 ceph-mon[74381]: osdmap e144: 3 total, 3 up, 3 in
Jan 20 18:46:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 20 18:46:22 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 20 18:46:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 145 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=2 ec=62/41 lis/c=143/74 les/c/f=144/75/0 sis=145) [0] r=0 lpr=145 pi=[74,145)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:22 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 145 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=2 ec=62/41 lis/c=143/74 les/c/f=144/75/0 sis=145) [0] r=0 lpr=145 pi=[74,145)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:22 compute-0 python3.9[111147]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:46:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v103: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s; 27 B/s, 0 objects/s recovering
Jan 20 18:46:22 compute-0 sudo[111145]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:23.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:23 compute-0 sudo[111223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvbcsgdtbbucqtdqsyxrbqrxjwlqijzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934782.4801488-841-220179722643723/AnsiballZ_file.py'
Jan 20 18:46:23 compute-0 sudo[111223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:23 compute-0 python3.9[111225]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:46:23 compute-0 sudo[111223]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184623 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:46:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 20 18:46:23 compute-0 ceph-mon[74381]: osdmap e145: 3 total, 3 up, 3 in
Jan 20 18:46:23 compute-0 ceph-mon[74381]: pgmap v103: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s; 27 B/s, 0 objects/s recovering
Jan 20 18:46:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 20 18:46:23 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 20 18:46:23 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 146 pg[9.1b( v 49'1085 (0'0,49'1085] local-lis/les=145/146 n=2 ec=62/41 lis/c=143/74 les/c/f=144/75/0 sis=145) [0] r=0 lpr=145 pi=[74,145)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:46:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994000d90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:24 compute-0 sudo[111377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbfjnlwfuihjraeoxlfnzcncmmdnplqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934783.9091246-880-86559525391313/AnsiballZ_stat.py'
Jan 20 18:46:24 compute-0 sudo[111377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:24 compute-0 python3.9[111379]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:46:24 compute-0 sudo[111377]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:24 compute-0 sudo[111455]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdexltlymyhvhxutjzduirkbgmeanuyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934783.9091246-880-86559525391313/AnsiballZ_file.py'
Jan 20 18:46:24 compute-0 sudo[111455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:24 compute-0 python3.9[111457]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:46:24 compute-0 sudo[111455]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:24 compute-0 ceph-mon[74381]: osdmap e146: 3 total, 3 up, 3 in
Jan 20 18:46:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v105: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 942 B/s rd, 235 B/s wr, 1 op/s; 25 B/s, 0 objects/s recovering
Jan 20 18:46:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:25.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fb40fdc60d0>)]
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fb40fdc6040>)]
Jan 20 18:46:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Jan 20 18:46:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:25 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:46:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:25 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:46:25 compute-0 sudo[111609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdxurboyqnkcyrwjdfzjqygzdimglvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934785.5861013-925-45086563636478/AnsiballZ_dnf.py'
Jan 20 18:46:25 compute-0 sudo[111609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:26 compute-0 python3.9[111611]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:46:26 compute-0 ceph-mon[74381]: pgmap v105: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 942 B/s rd, 235 B/s wr, 1 op/s; 25 B/s, 0 objects/s recovering
Jan 20 18:46:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:46:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:26.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9940018b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 2 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 20 18:46:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 20 18:46:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:26.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:46:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:27.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 20 18:46:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 20 18:46:27 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Jan 20 18:46:27 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 18:46:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 20 18:46:27 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 20 18:46:27 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.cepfkm(active, since 92s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:46:27 compute-0 sudo[111609]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:28 compute-0 ceph-mon[74381]: pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 2 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:28 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 18:46:28 compute-0 ceph-mon[74381]: osdmap e147: 3 total, 3 up, 3 in
Jan 20 18:46:28 compute-0 ceph-mon[74381]: mgrmap e33: compute-0.cepfkm(active, since 92s), standbys: compute-1.whkwsm, compute-2.pyghhf
Jan 20 18:46:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:46:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:46:28 compute-0 python3.9[111764]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:46:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 2 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 20 18:46:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 20 18:46:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:46:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:29.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:46:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 20 18:46:29 compute-0 ceph-mon[74381]: pgmap v108: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1.2 KiB/s wr, 2 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:29 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 20 18:46:29 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Jan 20 18:46:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 18:46:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 20 18:46:29 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 20 18:46:29 compute-0 python3.9[111916]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 18:46:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:29] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Jan 20 18:46:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:29] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Jan 20 18:46:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9940018b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:30 compute-0 sudo[112018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:46:30 compute-0 sudo[112018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:30 compute-0 sudo[112018]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 20 18:46:30 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 18:46:30 compute-0 ceph-mon[74381]: osdmap e148: 3 total, 3 up, 3 in
Jan 20 18:46:30 compute-0 python3.9[112093]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:46:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 20 18:46:30 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 20 18:46:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b4002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:46:30 compute-0 sudo[112118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:46:30 compute-0 sudo[112118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:30 compute-0 sudo[112118]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:30 compute-0 sudo[112143]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:46:30 compute-0 sudo[112143]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v111: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.8 KiB/s wr, 6 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:31 compute-0 podman[112283]: 2026-01-20 18:46:31.385900319 +0000 UTC m=+0.058120521 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:46:31 compute-0 podman[112283]: 2026-01-20 18:46:31.511132632 +0000 UTC m=+0.183352804 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:46:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 20 18:46:31 compute-0 ceph-mon[74381]: osdmap e149: 3 total, 3 up, 3 in
Jan 20 18:46:31 compute-0 ceph-mon[74381]: pgmap v111: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.8 KiB/s wr, 6 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 20 18:46:31 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 20 18:46:31 compute-0 sudo[112498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irmuqjvkmxcuvhnmlrbgqkzpbzrlbayy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934791.2867737-1048-204230165460463/AnsiballZ_systemd.py'
Jan 20 18:46:31 compute-0 sudo[112498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:32 compute-0 podman[112499]: 2026-01-20 18:46:32.009884055 +0000 UTC m=+0.068760566 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:46:32 compute-0 podman[112499]: 2026-01-20 18:46:32.022269295 +0000 UTC m=+0.081145776 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:46:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9940018b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:32 compute-0 python3.9[112507]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:46:32 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 18:46:32 compute-0 podman[112575]: 2026-01-20 18:46:32.321036572 +0000 UTC m=+0.083296133 container exec a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:46:32 compute-0 podman[112575]: 2026-01-20 18:46:32.334345401 +0000 UTC m=+0.096604972 container exec_died a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:46:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:32 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 18:46:32 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 18:46:32 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 18:46:32 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 18:46:32 compute-0 sudo[112498]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:32 compute-0 podman[112643]: 2026-01-20 18:46:32.640025925 +0000 UTC m=+0.074122284 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:46:32 compute-0 podman[112643]: 2026-01-20 18:46:32.665447916 +0000 UTC m=+0.099544285 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:46:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 20 18:46:32 compute-0 ceph-mon[74381]: osdmap e150: 3 total, 3 up, 3 in
Jan 20 18:46:32 compute-0 podman[112736]: 2026-01-20 18:46:32.890789981 +0000 UTC m=+0.055143627 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, build-date=2023-02-22T09:23:20, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, name=keepalived, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, architecture=x86_64)
Jan 20 18:46:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 20 18:46:32 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 20 18:46:32 compute-0 podman[112736]: 2026-01-20 18:46:32.902159388 +0000 UTC m=+0.066513034 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, io.buildah.version=1.28.2, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vendor=Red Hat, Inc., architecture=x86_64, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Jan 20 18:46:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v114: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:46:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:33.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:33 compute-0 podman[112800]: 2026-01-20 18:46:33.103670553 +0000 UTC m=+0.057092599 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:46:33 compute-0 podman[112800]: 2026-01-20 18:46:33.133197273 +0000 UTC m=+0.086619269 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:46:33 compute-0 podman[112972]: 2026-01-20 18:46:33.343441952 +0000 UTC m=+0.062055904 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:46:33 compute-0 podman[112972]: 2026-01-20 18:46:33.515167479 +0000 UTC m=+0.233781401 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:46:33 compute-0 python3.9[113022]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 18:46:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:33 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:46:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 20 18:46:33 compute-0 ceph-mon[74381]: osdmap e151: 3 total, 3 up, 3 in
Jan 20 18:46:33 compute-0 ceph-mon[74381]: pgmap v114: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:46:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 20 18:46:33 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 20 18:46:33 compute-0 podman[113136]: 2026-01-20 18:46:33.962416121 +0000 UTC m=+0.057554063 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:46:33 compute-0 podman[113136]: 2026-01-20 18:46:33.998416104 +0000 UTC m=+0.093554046 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:46:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:34 compute-0 sudo[112143]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:46:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:46:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:34 compute-0 sudo[113179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:46:34 compute-0 sudo[113179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:34 compute-0 sudo[113179]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:34 compute-0 sudo[113204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:46:34 compute-0 sudo[113204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:34.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:34 compute-0 sudo[113204]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994002d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:46:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:46:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 sudo[113262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:46:34 compute-0 sudo[113262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:34 compute-0 sudo[113262]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:34 compute-0 sudo[113287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:46:34 compute-0 sudo[113287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v116: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 933 B/s wr, 4 op/s
Jan 20 18:46:34 compute-0 ceph-mon[74381]: osdmap e152: 3 total, 3 up, 3 in
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:46:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:46:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:35.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.345154847 +0000 UTC m=+0.054659532 container create 85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:46:35 compute-0 systemd[1]: Started libpod-conmon-85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c.scope.
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.325264351 +0000 UTC m=+0.034769136 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:46:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.441929484 +0000 UTC m=+0.151434219 container init 85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.45291511 +0000 UTC m=+0.162419795 container start 85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_saha, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.456997988 +0000 UTC m=+0.166502663 container attach 85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True)
Jan 20 18:46:35 compute-0 hopeful_saha[113369]: 167 167
Jan 20 18:46:35 compute-0 systemd[1]: libpod-85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c.scope: Deactivated successfully.
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.459319401 +0000 UTC m=+0.168824086 container died 85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_saha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:46:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b84e3713996c303afe0d7e8dd9d87c7a3d1c39fd225c862750b4dfcb6892773-merged.mount: Deactivated successfully.
Jan 20 18:46:35 compute-0 podman[113353]: 2026-01-20 18:46:35.505188735 +0000 UTC m=+0.214693430 container remove 85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_saha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:46:35 compute-0 systemd[1]: libpod-conmon-85271c29cdf63b3aca25838db816f3a16525261429040276feff22f08b376e1c.scope: Deactivated successfully.
Jan 20 18:46:35 compute-0 podman[113395]: 2026-01-20 18:46:35.680753264 +0000 UTC m=+0.050425579 container create a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:46:35 compute-0 systemd[1]: Started libpod-conmon-a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e.scope.
Jan 20 18:46:35 compute-0 podman[113395]: 2026-01-20 18:46:35.661755715 +0000 UTC m=+0.031428050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:46:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14051c435c33a0b8c932871b68878769939de83554cae7b2faea0d8cff436548/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14051c435c33a0b8c932871b68878769939de83554cae7b2faea0d8cff436548/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14051c435c33a0b8c932871b68878769939de83554cae7b2faea0d8cff436548/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14051c435c33a0b8c932871b68878769939de83554cae7b2faea0d8cff436548/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14051c435c33a0b8c932871b68878769939de83554cae7b2faea0d8cff436548/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:35 compute-0 podman[113395]: 2026-01-20 18:46:35.786046039 +0000 UTC m=+0.155718374 container init a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:46:35 compute-0 podman[113395]: 2026-01-20 18:46:35.802286559 +0000 UTC m=+0.171958884 container start a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ride, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:46:35 compute-0 podman[113395]: 2026-01-20 18:46:35.807418721 +0000 UTC m=+0.177091056 container attach a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ride, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:46:35 compute-0 ceph-mon[74381]: pgmap v116: 337 pgs: 1 remapped+peering, 336 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 933 B/s wr, 4 op/s
Jan 20 18:46:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:36 compute-0 vigorous_ride[113411]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:46:36 compute-0 vigorous_ride[113411]: --> All data devices are unavailable
Jan 20 18:46:36 compute-0 systemd[1]: libpod-a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e.scope: Deactivated successfully.
Jan 20 18:46:36 compute-0 podman[113395]: 2026-01-20 18:46:36.182998766 +0000 UTC m=+0.552671081 container died a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 20 18:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-14051c435c33a0b8c932871b68878769939de83554cae7b2faea0d8cff436548-merged.mount: Deactivated successfully.
Jan 20 18:46:36 compute-0 podman[113395]: 2026-01-20 18:46:36.241949212 +0000 UTC m=+0.611621527 container remove a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 18:46:36 compute-0 systemd[1]: libpod-conmon-a71459247cf9fa6fde9edea497413fe3068c4b3f0556d27eb214c1140d3ace1e.scope: Deactivated successfully.
Jan 20 18:46:36 compute-0 sudo[113287]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:36.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:36 compute-0 sudo[113438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:46:36 compute-0 sudo[113438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:36 compute-0 sudo[113438]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:36 compute-0 sudo[113463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:46:36 compute-0 sudo[113463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:46:36 compute-0 podman[113527]: 2026-01-20 18:46:36.820600872 +0000 UTC m=+0.021932362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:46:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 511 B/s wr, 1 op/s; 36 B/s, 0 objects/s recovering
Jan 20 18:46:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 20 18:46:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 20 18:46:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:36.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:46:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 20 18:46:36 compute-0 podman[113527]: 2026-01-20 18:46:36.968029953 +0000 UTC m=+0.169361423 container create c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:46:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 20 18:46:36 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Jan 20 18:46:36 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 18:46:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 20 18:46:36 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.003190) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934797003234, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2821, "num_deletes": 251, "total_data_size": 7157180, "memory_usage": 7359488, "flush_reason": "Manual Compaction"}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 20 18:46:37 compute-0 systemd[1]: Started libpod-conmon-c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343.scope.
Jan 20 18:46:37 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 153 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=86/86 les/c/f=87/87/0 sis=153) [0] r=0 lpr=153 pi=[86,153)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:46:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:37.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934797058659, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 6739429, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8402, "largest_seqno": 11222, "table_properties": {"data_size": 6725777, "index_size": 8866, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3717, "raw_key_size": 32435, "raw_average_key_size": 22, "raw_value_size": 6696764, "raw_average_value_size": 4571, "num_data_blocks": 384, "num_entries": 1465, "num_filter_entries": 1465, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934688, "oldest_key_time": 1768934688, "file_creation_time": 1768934797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 55557 microseconds, and 24980 cpu microseconds.
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.058744) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 6739429 bytes OK
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.058775) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.061076) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.061100) EVENT_LOG_v1 {"time_micros": 1768934797061092, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.061127) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7144274, prev total WAL file size 7144274, number of live WAL files 2.
Jan 20 18:46:37 compute-0 podman[113527]: 2026-01-20 18:46:37.062578761 +0000 UTC m=+0.263910281 container init c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ganguly, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.064311) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(6581KB)], [23(12MB)]
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934797064373, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 19522268, "oldest_snapshot_seqno": -1}
Jan 20 18:46:37 compute-0 podman[113527]: 2026-01-20 18:46:37.072319756 +0000 UTC m=+0.273651236 container start c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:46:37 compute-0 nervous_ganguly[113544]: 167 167
Jan 20 18:46:37 compute-0 podman[113527]: 2026-01-20 18:46:37.076307762 +0000 UTC m=+0.277639242 container attach c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:46:37 compute-0 systemd[1]: libpod-c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343.scope: Deactivated successfully.
Jan 20 18:46:37 compute-0 podman[113527]: 2026-01-20 18:46:37.077735107 +0000 UTC m=+0.279066587 container died c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 18:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-98acb4d100417f3cba6684d51dcec57346f270fd0eb732de90217a7093596ca5-merged.mount: Deactivated successfully.
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4298 keys, 15087098 bytes, temperature: kUnknown
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934797172484, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 15087098, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15052469, "index_size": 22807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 109722, "raw_average_key_size": 25, "raw_value_size": 14968047, "raw_average_value_size": 3482, "num_data_blocks": 974, "num_entries": 4298, "num_filter_entries": 4298, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768934797, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.172916) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 15087098 bytes
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.174296) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.3 rd, 139.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(6.4, 12.2 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(5.1) write-amplify(2.2) OK, records in: 4832, records dropped: 534 output_compression: NoCompression
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.174326) EVENT_LOG_v1 {"time_micros": 1768934797174313, "job": 8, "event": "compaction_finished", "compaction_time_micros": 108262, "compaction_time_cpu_micros": 45259, "output_level": 6, "num_output_files": 1, "total_output_size": 15087098, "num_input_records": 4832, "num_output_records": 4298, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934797176989, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934797181883, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.064180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:46:37 compute-0 podman[113527]: 2026-01-20 18:46:37.181861096 +0000 UTC m=+0.383192576 container remove c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_ganguly, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.182117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.182125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.182128) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.182130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:46:37 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:46:37.182133) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:46:37 compute-0 systemd[1]: libpod-conmon-c0991df53a7b19052566626e8aff4f8328c1bbbd2fee0e73e2f8f087cff5c343.scope: Deactivated successfully.
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.356859506 +0000 UTC m=+0.051145012 container create e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cray, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:46:37 compute-0 sudo[113709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uosgcidyqdumoxmuirtkwsdabuziylpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934797.0898767-1219-21079848531177/AnsiballZ_systemd.py'
Jan 20 18:46:37 compute-0 sudo[113709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:37 compute-0 systemd[1]: Started libpod-conmon-e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1.scope.
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.333027616 +0000 UTC m=+0.027313152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:46:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b5769e6252324cc93fa00bfa292fe79f20ec72af43f1f008284b44b8d235d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b5769e6252324cc93fa00bfa292fe79f20ec72af43f1f008284b44b8d235d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b5769e6252324cc93fa00bfa292fe79f20ec72af43f1f008284b44b8d235d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b5769e6252324cc93fa00bfa292fe79f20ec72af43f1f008284b44b8d235d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.466485347 +0000 UTC m=+0.160770873 container init e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.473096996 +0000 UTC m=+0.167382502 container start e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cray, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.477020629 +0000 UTC m=+0.171306135 container attach e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 18:46:37 compute-0 python3.9[113711]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:46:37 compute-0 sudo[113709]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:37 compute-0 recursing_cray[113716]: {
Jan 20 18:46:37 compute-0 recursing_cray[113716]:     "0": [
Jan 20 18:46:37 compute-0 recursing_cray[113716]:         {
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "devices": [
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "/dev/loop3"
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             ],
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "lv_name": "ceph_lv0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "lv_size": "21470642176",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "name": "ceph_lv0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "tags": {
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.cluster_name": "ceph",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.crush_device_class": "",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.encrypted": "0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.osd_id": "0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.type": "block",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.vdo": "0",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:                 "ceph.with_tpm": "0"
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             },
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "type": "block",
Jan 20 18:46:37 compute-0 recursing_cray[113716]:             "vg_name": "ceph_vg0"
Jan 20 18:46:37 compute-0 recursing_cray[113716]:         }
Jan 20 18:46:37 compute-0 recursing_cray[113716]:     ]
Jan 20 18:46:37 compute-0 recursing_cray[113716]: }
Jan 20 18:46:37 compute-0 systemd[1]: libpod-e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1.scope: Deactivated successfully.
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.765471431 +0000 UTC m=+0.459757017 container died e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Jan 20 18:46:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b5769e6252324cc93fa00bfa292fe79f20ec72af43f1f008284b44b8d235d7-merged.mount: Deactivated successfully.
Jan 20 18:46:37 compute-0 podman[113668]: 2026-01-20 18:46:37.81278527 +0000 UTC m=+0.507070766 container remove e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_cray, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:46:37 compute-0 systemd[1]: libpod-conmon-e9fa75c41edd71c6ab76f2c766bef7b0f257bc7ce73f8650059190f10d624ea1.scope: Deactivated successfully.
Jan 20 18:46:37 compute-0 sudo[113463]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:37 compute-0 sudo[113788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:46:37 compute-0 sudo[113788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:37 compute-0 sudo[113788]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:37 compute-0 sudo[113843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:46:37 compute-0 sudo[113843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:37 compute-0 ceph-mon[74381]: pgmap v117: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 511 B/s wr, 1 op/s; 36 B/s, 0 objects/s recovering
Jan 20 18:46:37 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 18:46:37 compute-0 ceph-mon[74381]: osdmap e153: 3 total, 3 up, 3 in
Jan 20 18:46:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 20 18:46:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 20 18:46:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 154 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=86/86 les/c/f=87/87/0 sis=154) [0]/[1] r=-1 lpr=154 pi=[86,154)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:38 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 154 pg[9.1e( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=86/86 les/c/f=87/87/0 sis=154) [0]/[1] r=-1 lpr=154 pi=[86,154)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 18:46:38 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 20 18:46:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994002d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994002d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:38 compute-0 sudo[113941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dagnjyenshimwwbgewqttspbvctjwdml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934797.8708034-1219-175116285759631/AnsiballZ_systemd.py'
Jan 20 18:46:38 compute-0 sudo[113941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:38.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.387090932 +0000 UTC m=+0.043915773 container create b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:46:38 compute-0 systemd[1]: Started libpod-conmon-b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963.scope.
Jan 20 18:46:38 compute-0 python3.9[113943]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:46:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.366449343 +0000 UTC m=+0.023274194 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.471024245 +0000 UTC m=+0.127849086 container init b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.479075559 +0000 UTC m=+0.135900390 container start b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.482048742 +0000 UTC m=+0.138873613 container attach b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:46:38 compute-0 interesting_almeida[113998]: 167 167
Jan 20 18:46:38 compute-0 systemd[1]: libpod-b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963.scope: Deactivated successfully.
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.487213225 +0000 UTC m=+0.144038126 container died b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:46:38 compute-0 sudo[113941]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-35ca0667e52a97f77d3a4ca1d623e254c35c11a8e5bce49be11ec4b84a9d9c27-merged.mount: Deactivated successfully.
Jan 20 18:46:38 compute-0 podman[113982]: 2026-01-20 18:46:38.555875557 +0000 UTC m=+0.212700388 container remove b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:46:38 compute-0 systemd[1]: libpod-conmon-b3fc76e8791ef0d877fe965ad41655556bd1f36574bf721fa12937cbf6ffa963.scope: Deactivated successfully.
Jan 20 18:46:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994002d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:38 compute-0 podman[114050]: 2026-01-20 18:46:38.759735596 +0000 UTC m=+0.050345906 container create f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mcnulty, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 18:46:38 compute-0 systemd[1]: Started libpod-conmon-f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d.scope.
Jan 20 18:46:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e2597e46da7f07afde2905339a308be435e7df83530d4883113b90a74d994/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:38 compute-0 podman[114050]: 2026-01-20 18:46:38.740040976 +0000 UTC m=+0.030651286 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e2597e46da7f07afde2905339a308be435e7df83530d4883113b90a74d994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e2597e46da7f07afde2905339a308be435e7df83530d4883113b90a74d994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a47e2597e46da7f07afde2905339a308be435e7df83530d4883113b90a74d994/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:46:38 compute-0 podman[114050]: 2026-01-20 18:46:38.851452833 +0000 UTC m=+0.142063143 container init f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mcnulty, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 18:46:38 compute-0 podman[114050]: 2026-01-20 18:46:38.860652613 +0000 UTC m=+0.151262903 container start f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:46:38 compute-0 podman[114050]: 2026-01-20 18:46:38.864484943 +0000 UTC m=+0.155095283 container attach f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 18:46:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 511 B/s wr, 1 op/s; 36 B/s, 0 objects/s recovering
Jan 20 18:46:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 18:46:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:46:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184638 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:46:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 20 18:46:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:46:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 20 18:46:39 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 20 18:46:39 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 155 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=155) [0] r=0 lpr=155 pi=[109,155)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:39 compute-0 ceph-mon[74381]: osdmap e154: 3 total, 3 up, 3 in
Jan 20 18:46:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:46:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Jan 20 18:46:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 18:46:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:39.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 18:46:39 compute-0 sshd-session[102231]: Connection closed by 192.168.122.30 port 47672
Jan 20 18:46:39 compute-0 sshd-session[102201]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:46:39 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 18:46:39 compute-0 systemd[1]: session-40.scope: Consumed 1min 6.223s CPU time.
Jan 20 18:46:39 compute-0 systemd-logind[796]: Session 40 logged out. Waiting for processes to exit.
Jan 20 18:46:39 compute-0 systemd-logind[796]: Removed session 40.
Jan 20 18:46:39 compute-0 lvm[114141]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:46:39 compute-0 lvm[114141]: VG ceph_vg0 finished
Jan 20 18:46:39 compute-0 silly_mcnulty[114067]: {}
Jan 20 18:46:39 compute-0 systemd[1]: libpod-f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d.scope: Deactivated successfully.
Jan 20 18:46:39 compute-0 systemd[1]: libpod-f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d.scope: Consumed 1.259s CPU time.
Jan 20 18:46:39 compute-0 podman[114050]: 2026-01-20 18:46:39.638905467 +0000 UTC m=+0.929515757 container died f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:46:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47e2597e46da7f07afde2905339a308be435e7df83530d4883113b90a74d994-merged.mount: Deactivated successfully.
Jan 20 18:46:39 compute-0 podman[114050]: 2026-01-20 18:46:39.685958348 +0000 UTC m=+0.976568628 container remove f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 18:46:39 compute-0 systemd[1]: libpod-conmon-f052bf9460f676d5f95005db0e1e6a8c438b92f2a95d8ca4afb28677ae79f36d.scope: Deactivated successfully.
Jan 20 18:46:39 compute-0 sudo[113843]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:46:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:46:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:39] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Jan 20 18:46:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:39] "GET /metrics HTTP/1.1" 200 48326 "" "Prometheus/2.51.0"
Jan 20 18:46:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:39 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:46:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:39 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:46:39 compute-0 sudo[114159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:46:39 compute-0 sudo[114159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:39 compute-0 sudo[114159]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 20 18:46:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b40089d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 20 18:46:40 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 20 18:46:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:40 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 18:46:40 compute-0 ceph-mon[74381]: pgmap v120: 337 pgs: 337 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 511 B/s wr, 1 op/s; 36 B/s, 0 objects/s recovering
Jan 20 18:46:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 18:46:40 compute-0 ceph-mon[74381]: osdmap e155: 3 total, 3 up, 3 in
Jan 20 18:46:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:46:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:46:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:40.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994002d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v123: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.2 KiB/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 20 18:46:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:41.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 20 18:46:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 20 18:46:41 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 20 18:46:41 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:46:41 compute-0 ceph-mon[74381]: osdmap e156: 3 total, 3 up, 3 in
Jan 20 18:46:41 compute-0 ceph-mon[74381]: pgmap v123: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.2 KiB/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 20 18:46:41 compute-0 ceph-mon[74381]: osdmap e157: 3 total, 3 up, 3 in
Jan 20 18:46:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 20 18:46:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 20 18:46:42 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 20 18:46:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 18:46:42 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 18:46:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:42.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:46:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v126: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.2 KiB/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 20 18:46:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:43.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 20 18:46:43 compute-0 ceph-mon[74381]: osdmap e158: 3 total, 3 up, 3 in
Jan 20 18:46:43 compute-0 ceph-mon[74381]: pgmap v126: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 2.2 KiB/s wr, 8 op/s; 0 B/s, 1 objects/s recovering
Jan 20 18:46:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 20 18:46:43 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 20 18:46:43 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 18:46:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:44 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994002d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:44 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:44 compute-0 ceph-mon[74381]: osdmap e159: 3 total, 3 up, 3 in
Jan 20 18:46:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:44.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:44 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v128: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.9 KiB/s wr, 7 op/s; 0 B/s, 1 objects/s recovering
Jan 20 18:46:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:45.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:45 compute-0 sshd-session[114188]: Accepted publickey for zuul from 192.168.122.30 port 39142 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:46:45 compute-0 systemd-logind[796]: New session 41 of user zuul.
Jan 20 18:46:45 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 20 18:46:45 compute-0 sshd-session[114188]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:46:45 compute-0 ceph-mon[74381]: pgmap v128: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 1.9 KiB/s wr, 7 op/s; 0 B/s, 1 objects/s recovering
Jan 20 18:46:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184645 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:46:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:46 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:46 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:46 compute-0 python3.9[114343]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:46:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:46.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:46 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:46.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:46:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:46.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:46:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:46.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:46:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:47.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:47 compute-0 sudo[114497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhtedknwjkpmbrfussterrzgjuchoikn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934806.932576-63-226310030506778/AnsiballZ_getent.py'
Jan 20 18:46:47 compute-0 sudo[114497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:47 compute-0 python3.9[114499]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 18:46:47 compute-0 sudo[114497]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:48 compute-0 ceph-mon[74381]: pgmap v129: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Jan 20 18:46:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:48 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:48 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:48 compute-0 sudo[114652]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irvmgqinqwzedlduanwgsotxyizgivse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934807.957832-99-24833945714489/AnsiballZ_setup.py'
Jan 20 18:46:48 compute-0 sudo[114652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:48.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:48 compute-0 python3.9[114654]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:46:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:48 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:48 compute-0 sudo[114652]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 131 B/s rd, 0 B/s wr, 0 op/s; 14 B/s, 0 objects/s recovering
Jan 20 18:46:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:49.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:49 compute-0 sudo[114736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcgyubmohcsexhinailimaawmbhtxjcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934807.957832-99-24833945714489/AnsiballZ_dnf.py'
Jan 20 18:46:49 compute-0 sudo[114736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:49 compute-0 python3.9[114738]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 18:46:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:49] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Jan 20 18:46:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:49] "GET /metrics HTTP/1.1" 200 48327 "" "Prometheus/2.51.0"
Jan 20 18:46:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:50 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:50 compute-0 ceph-mon[74381]: pgmap v130: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 131 B/s rd, 0 B/s wr, 0 op/s; 14 B/s, 0 objects/s recovering
Jan 20 18:46:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:50 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:50 compute-0 sudo[114742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:46:50 compute-0 sudo[114742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:46:50 compute-0 sudo[114742]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:50.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:50 compute-0 sudo[114736]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:50 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 116 B/s rd, 0 B/s wr, 0 op/s; 12 B/s, 0 objects/s recovering
Jan 20 18:46:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:51.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:51 compute-0 sudo[114918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlgryotlpuleblhskfgapckcyxzxiniz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934811.4634514-141-147473849612494/AnsiballZ_dnf.py'
Jan 20 18:46:51 compute-0 sudo[114918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:51 compute-0 python3.9[114920]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:46:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:52 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:52 compute-0 ceph-mon[74381]: pgmap v131: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 116 B/s rd, 0 B/s wr, 0 op/s; 12 B/s, 0 objects/s recovering
Jan 20 18:46:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:52 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:52.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:52 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c003730 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s; 10 B/s, 0 objects/s recovering
Jan 20 18:46:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:53.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:53 compute-0 ceph-mon[74381]: pgmap v132: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 102 B/s rd, 0 B/s wr, 0 op/s; 10 B/s, 0 objects/s recovering
Jan 20 18:46:53 compute-0 sudo[114918]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:54 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9b400a840 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:54 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:54 compute-0 sudo[115075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbxfxationibovbtmmmdlalatahsmwtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934813.5784225-165-157479893286214/AnsiballZ_systemd.py'
Jan 20 18:46:54 compute-0 sudo[115075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:54.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:54 compute-0 python3.9[115077]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:46:54 compute-0 sudo[115075]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:54 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:46:54
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'images', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', '.nfs']
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 87 B/s rd, 0 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:46:54 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:46:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:46:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:46:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:46:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:55.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:46:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:46:55 compute-0 python3.9[115230]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:46:56 compute-0 ceph-mon[74381]: pgmap v133: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 87 B/s rd, 0 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Jan 20 18:46:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:56 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:56 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:56.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:56 compute-0 sudo[115382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdflfcnolytguyqfxbibfdlpzhkuxprg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934816.1103969-219-126269356863409/AnsiballZ_sefcontext.py'
Jan 20 18:46:56 compute-0 sudo[115382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:56 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:56 compute-0 python3.9[115384]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 18:46:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Jan 20 18:46:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:46:56.961Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:46:57 compute-0 sudo[115382]: pam_unix(sudo:session): session closed for user root
Jan 20 18:46:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:57.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:46:58 compute-0 python3.9[115536]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:46:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:58 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0001f90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:58 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:58 compute-0 ceph-mon[74381]: pgmap v134: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s; 9 B/s, 0 objects/s recovering
Jan 20 18:46:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:46:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:46:58.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:46:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:46:58 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:46:58 compute-0 sudo[115692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdtzwjzivwuqqciazxfzaegptwcpdim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934818.6106346-273-204634379312751/AnsiballZ_dnf.py'
Jan 20 18:46:58 compute-0 sudo[115692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:46:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:46:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:46:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:46:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:46:59.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:46:59 compute-0 python3.9[115694]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:46:59 compute-0 ceph-mon[74381]: pgmap v135: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:46:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:59] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 20 18:46:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:46:59] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 20 18:47:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:00 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:00 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:00 compute-0 sudo[115692]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:00.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:00 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:01.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:01 compute-0 sudo[115847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfrissqqfbsfojmqfiyqnzazifpwmiph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934820.8925507-297-265375270968767/AnsiballZ_command.py'
Jan 20 18:47:01 compute-0 sudo[115847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:01 compute-0 python3.9[115849]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:47:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:02 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:02 compute-0 sudo[115847]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:02 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:02 compute-0 ceph-mon[74381]: pgmap v136: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:02.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:02 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:02 compute-0 sudo[116136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbndivqbcvhdceqiawtfvhdzswpesstn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934822.4870484-321-216866833719673/AnsiballZ_file.py'
Jan 20 18:47:02 compute-0 sudo[116136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:03.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:03 compute-0 python3.9[116138]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 18:47:03 compute-0 sudo[116136]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:03 compute-0 ceph-mon[74381]: pgmap v137: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:03 compute-0 python3.9[116290]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:47:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:04.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:04 compute-0 sudo[116442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bogjrfijxhubnwaalchvmvjcroumktyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934824.207461-369-267315658299667/AnsiballZ_dnf.py'
Jan 20 18:47:04 compute-0 sudo[116442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:04 compute-0 python3.9[116444]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:47:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:04 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:05.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:06 compute-0 ceph-mon[74381]: pgmap v138: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:06 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:06 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:06 compute-0 sudo[116442]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:06.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:06 compute-0 sudo[116597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhglgtrskjtvibipbxoeiaxadfdqgqyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934826.3668933-396-241863661522976/AnsiballZ_dnf.py'
Jan 20 18:47:06 compute-0 sudo[116597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:06 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:06 compute-0 python3.9[116599]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:47:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:06.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:47:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:07.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:08 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:08 compute-0 ceph-mon[74381]: pgmap v139: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:08 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 18:47:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:08.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 18:47:08 compute-0 sudo[116597]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:08 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:09.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:09 compute-0 sudo[116752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyfjfxmqrpeixzzrbieqbovqeuejkynk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934828.87692-432-237951778573831/AnsiballZ_stat.py'
Jan 20 18:47:09 compute-0 sudo[116752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:09 compute-0 python3.9[116754]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:47:09 compute-0 sudo[116752]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:09 compute-0 ceph-mon[74381]: pgmap v140: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:09] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 20 18:47:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:09] "GET /metrics HTTP/1.1" 200 48324 "" "Prometheus/2.51.0"
Jan 20 18:47:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:10 compute-0 sudo[116843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:47:10 compute-0 sudo[116843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:10 compute-0 sudo[116843]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:10.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:10 compute-0 sudo[116933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmtnwteycsarzemwwwdebbxevpzioiyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934829.7668793-456-102495264567533/AnsiballZ_slurp.py'
Jan 20 18:47:10 compute-0 sudo[116933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:10 compute-0 python3.9[116935]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 20 18:47:10 compute-0 sudo[116933]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:10 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:11.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:47:11 compute-0 sshd-session[114191]: Connection closed by 192.168.122.30 port 39142
Jan 20 18:47:11 compute-0 sshd-session[114188]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:47:11 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 18:47:11 compute-0 systemd[1]: session-41.scope: Consumed 17.834s CPU time.
Jan 20 18:47:11 compute-0 systemd-logind[796]: Session 41 logged out. Waiting for processes to exit.
Jan 20 18:47:11 compute-0 systemd-logind[796]: Removed session 41.
Jan 20 18:47:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:12 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:12 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:12 compute-0 ceph-mon[74381]: pgmap v141: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:12.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:12 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:13.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:13 compute-0 ceph-mon[74381]: pgmap v142: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:14 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:14 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:14.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:14 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:15.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:16 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:16 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:16 compute-0 ceph-mon[74381]: pgmap v143: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:16.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:16 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:16.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:47:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:17.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:17 compute-0 sshd-session[116966]: Accepted publickey for zuul from 192.168.122.30 port 53762 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:47:17 compute-0 systemd-logind[796]: New session 42 of user zuul.
Jan 20 18:47:17 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 20 18:47:17 compute-0 sshd-session[116966]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:47:17 compute-0 ceph-mon[74381]: pgmap v144: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:18 compute-0 python3.9[117121]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:47:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:18.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:18 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:19.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:19 compute-0 python3.9[117275]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:47:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:19] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:47:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:19] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:47:20 compute-0 ceph-mon[74381]: pgmap v145: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:20.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:20 compute-0 python3.9[117470]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:47:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:20 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:21.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:21 compute-0 sshd-session[116969]: Connection closed by 192.168.122.30 port 53762
Jan 20 18:47:21 compute-0 sshd-session[116966]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:47:21 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 18:47:21 compute-0 systemd[1]: session-42.scope: Consumed 2.452s CPU time.
Jan 20 18:47:21 compute-0 systemd-logind[796]: Session 42 logged out. Waiting for processes to exit.
Jan 20 18:47:21 compute-0 systemd-logind[796]: Removed session 42.
Jan 20 18:47:22 compute-0 ceph-mon[74381]: pgmap v146: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000061s ======
Jan 20 18:47:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:22.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000061s
Jan 20 18:47:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:22 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:23.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:24 compute-0 ceph-mon[74381]: pgmap v147: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9ac003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:24.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:24 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:47:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:47:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:47:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:47:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:47:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:47:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:25.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:47:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:26 compute-0 sshd-session[117505]: Accepted publickey for zuul from 192.168.122.30 port 56578 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:47:26 compute-0 systemd-logind[796]: New session 43 of user zuul.
Jan 20 18:47:26 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 20 18:47:26 compute-0 sshd-session[117505]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:47:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:26.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:26 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:26 compute-0 ceph-mon[74381]: pgmap v148: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:26.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:47:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:27.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:27 compute-0 python3.9[117658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:47:27 compute-0 ceph-mon[74381]: pgmap v149: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000031s ======
Jan 20 18:47:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:28.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000031s
Jan 20 18:47:28 compute-0 python3.9[117814]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:47:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:28 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:29 compute-0 sudo[117968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ateozfpetyathontefcnrbomepyknjeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934849.0160391-75-160387568136501/AnsiballZ_setup.py'
Jan 20 18:47:29 compute-0 sudo[117968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:29 compute-0 python3.9[117970]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:47:29 compute-0 sudo[117968]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:29] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Jan 20 18:47:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:29] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Jan 20 18:47:30 compute-0 ceph-mon[74381]: pgmap v150: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:30 compute-0 sudo[118054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkuazvzqqrpnexasjsanvocfvtrgerye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934849.0160391-75-160387568136501/AnsiballZ_dnf.py'
Jan 20 18:47:30 compute-0 sudo[118054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:30 compute-0 sudo[118057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:47:30 compute-0 sudo[118057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:30 compute-0 sudo[118057]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:30.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:30 compute-0 python3.9[118056]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:47:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:30 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 18:47:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:31.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 18:47:31 compute-0 ceph-mon[74381]: pgmap v151: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:31 compute-0 sudo[118054]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:32 compute-0 sudo[118234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgbkoqmwoljqojvxxfyxacgibajkcnad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934851.9689102-111-248686477779454/AnsiballZ_setup.py'
Jan 20 18:47:32 compute-0 sudo[118234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 18:47:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:32.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 18:47:32 compute-0 python3.9[118236]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:47:32 compute-0 sudo[118234]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:32 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000030s ======
Jan 20 18:47:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:33.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000030s
Jan 20 18:47:33 compute-0 sudo[118431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgbdcvzoptysnwjvxqyeawenlsvbvyun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934853.268058-144-180372184579970/AnsiballZ_file.py'
Jan 20 18:47:33 compute-0 sudo[118431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:33 compute-0 python3.9[118433]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:47:33 compute-0 sudo[118431]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:34 compute-0 ceph-mon[74381]: pgmap v152: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000039s ======
Jan 20 18:47:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:34.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000039s
Jan 20 18:47:34 compute-0 sudo[118583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eazwuqazbwqvjahrdwlsicbyijcsinzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934854.1166177-168-247561638420473/AnsiballZ_command.py'
Jan 20 18:47:34 compute-0 sudo[118583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:34 compute-0 python3.9[118585]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:47:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:34 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a0002600 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:34 compute-0 sudo[118583]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:35.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:35 compute-0 sudo[118751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgwztskebiokkbmsvxontqpthqjprpxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934855.1673348-192-146635293160642/AnsiballZ_stat.py'
Jan 20 18:47:35 compute-0 sudo[118751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:35 compute-0 python3.9[118753]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:47:35 compute-0 sudo[118751]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:36 compute-0 sudo[118829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzdphnavutotnftphbzncrbvnojqbcwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934855.1673348-192-146635293160642/AnsiballZ_file.py'
Jan 20 18:47:36 compute-0 sudo[118829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:36 compute-0 python3.9[118831]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:47:36 compute-0 sudo[118829]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:36 compute-0 ceph-mon[74381]: pgmap v153: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:36.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:36 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:36 compute-0 sudo[118981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsqgarvfpsvxegswtypdnivwqqkhtjvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934856.5430048-228-206067858499419/AnsiballZ_stat.py'
Jan 20 18:47:36 compute-0 sudo[118981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:36.965Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:47:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:36.966Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:47:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:36.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:47:37 compute-0 python3.9[118983]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:47:37 compute-0 sudo[118981]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000039s ======
Jan 20 18:47:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:37.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000039s
Jan 20 18:47:37 compute-0 sudo[119059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zagdiuotugiagpxpqatlbdcjqeceramw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934856.5430048-228-206067858499419/AnsiballZ_file.py'
Jan 20 18:47:37 compute-0 sudo[119059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:37 compute-0 ceph-mon[74381]: pgmap v154: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:47:37 compute-0 python3.9[119061]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:47:37 compute-0 sudo[119059]: pam_unix(sudo:session): session closed for user root
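The stat/file pair wrapped by this sudo session is Ansible's usual idempotence dance for a templated file: the stat call checksums the target so the controller can skip the copy when the rendered template (registries.conf.j2) already matches, and the follow-up file task only enforces owner, group, mode and the etc_t SELinux type. The checksum half reduces to plain Python (sha1, as the stat invocation requests):

    import hashlib

    def sha1_of(path: str, chunk: int = 65536) -> str:
        """Stream the file in chunks so large files are not read at once."""
        digest = hashlib.sha1()
        with open(path, "rb") as fh:
            for block in iter(lambda: fh.read(chunk), b""):
                digest.update(block)
        return digest.hexdigest()

    print(sha1_of("/etc/containers/registries.conf.d/20-edpm-podman-registries.conf"))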
Jan 20 18:47:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a00043f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c0036a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:38 compute-0 sudo[119213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urbwxrppvuhkgrqdlvwbsvmatvemjvta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934857.8486292-267-10967058655332/AnsiballZ_ini_file.py'
Jan 20 18:47:38 compute-0 sudo[119213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:38.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:38 compute-0 python3.9[119215]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:47:38 compute-0 sudo[119213]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:38 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:39 compute-0 sudo[119365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvwgfuahdqugnjvzziizegheisrawyoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934858.7129154-267-183124615648810/AnsiballZ_ini_file.py'
Jan 20 18:47:39 compute-0 sudo[119365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000039s ======
Jan 20 18:47:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:39.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000039s
Jan 20 18:47:39 compute-0 python3.9[119367]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:47:39 compute-0 sudo[119365]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:39 compute-0 sudo[119519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciazkmzzdqbgofebsucwopwotvutgcdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934859.3750107-267-198053842928944/AnsiballZ_ini_file.py'
Jan 20 18:47:39 compute-0 sudo[119519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:39] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Jan 20 18:47:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:39] "GET /metrics HTTP/1.1" 200 48318 "" "Prometheus/2.51.0"
Jan 20 18:47:39 compute-0 python3.9[119521]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:47:39 compute-0 sudo[119519]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:40 compute-0 ceph-mon[74381]: pgmap v155: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:47:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd9a00043f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:40 compute-0 sudo[119550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:47:40 compute-0 sudo[119550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:40 compute-0 sudo[119550]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:40 compute-0 sudo[119605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:47:40 compute-0 sudo[119605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
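The cephadm `ls` run inside this sudo session (the orchestrator copies a versioned cephadm binary under /var/lib/ceph/<fsid>/ and executes it as root) inventories every daemon deployed on the host and prints a JSON array. A hedged reader for that output; the "name" and "state" keys match current cephadm, but treat the exact schema as an assumption:

    import json
    import subprocess

    # Shortened to the plain binary name; the log shows the full copied path.
    out = subprocess.run(
        ["cephadm", "ls"], check=True, capture_output=True, text=True
    ).stdout

    for daemon in json.loads(out):  # one dict per deployed daemon
        print(daemon.get("name"), daemon.get("state"))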
Jan 20 18:47:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:40.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:40 compute-0 sudo[119733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtuzpucgtpnvypveukaadoqmqometkbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934860.1508808-267-161632679055340/AnsiballZ_ini_file.py'
Jan 20 18:47:40 compute-0 sudo[119733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:40 compute-0 python3.9[119737]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:47:40 compute-0 sudo[119733]: pam_unix(sudo:session): session closed for user root
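Taken together, the four ini_file tasks in this stretch (pids_limit at 18:47:38, events_logger and runtime at 18:47:39, network_backend just above) should leave /etc/containers/containers.conf in the state below, assuming the file held nothing else beforehand. The string values keep their quotes because containers.conf is TOML-like and the tasks pass them already quoted:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"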
Jan 20 18:47:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:40 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd99c0036a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:40 compute-0 podman[119821]: 2026-01-20 18:47:40.926334051 +0000 UTC m=+0.084857379 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:47:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:41 compute-0 podman[119821]: 2026-01-20 18:47:41.043723665 +0000 UTC m=+0.202246983 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:47:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:41.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:41 compute-0 sudo[120033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsiuljhuecpezfxaizodxevlulkumhdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934861.073318-360-211808199017134/AnsiballZ_dnf.py'
Jan 20 18:47:41 compute-0 sudo[120033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:41 compute-0 python3.9[120037]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:47:41 compute-0 podman[120084]: 2026-01-20 18:47:41.69629451 +0000 UTC m=+0.069969200 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:47:41 compute-0 podman[120084]: 2026-01-20 18:47:41.70722758 +0000 UTC m=+0.080902230 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:47:42 compute-0 podman[120159]: 2026-01-20 18:47:42.082467979 +0000 UTC m=+0.079841188 container exec a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:47:42 compute-0 podman[120159]: 2026-01-20 18:47:42.100037945 +0000 UTC m=+0.097411144 container exec_died a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:47:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd998003b50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:47:42 compute-0 kernel: ganesha.nfsd[110993]: segfault at 50 ip 00007fda4114732e sp 00007fd9a9ffa210 error 4 in libntirpc.so.5.8[7fda4112c000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 20 18:47:42 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:47:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[103461]: 20/01/2026 18:47:42 : epoch 696fcd30 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fd994003e40 fd 42 proxy ignored for local
Jan 20 18:47:42 compute-0 ceph-mon[74381]: pgmap v156: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:42 compute-0 systemd[1]: Started Process Core Dump (PID 120197/UID 0).
Jan 20 18:47:42 compute-0 podman[120230]: 2026-01-20 18:47:42.425083078 +0000 UTC m=+0.090159852 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:47:42 compute-0 podman[120230]: 2026-01-20 18:47:42.439045898 +0000 UTC m=+0.104122632 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:47:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:42.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:42 compute-0 podman[120292]: 2026-01-20 18:47:42.692722175 +0000 UTC m=+0.070194219 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, build-date=2023-02-22T09:23:20, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, architecture=x86_64, io.buildah.version=1.28.2, release=1793, description=keepalived for Ceph, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vendor=Red Hat, Inc., name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 20 18:47:42 compute-0 podman[120292]: 2026-01-20 18:47:42.714152196 +0000 UTC m=+0.091624210 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, vendor=Red Hat, Inc., description=keepalived for Ceph, io.buildah.version=1.28.2, release=1793, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 20 18:47:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:42 compute-0 sudo[120033]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:43 compute-0 podman[120353]: 2026-01-20 18:47:43.012371171 +0000 UTC m=+0.083644249 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:47:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:43 compute-0 podman[120353]: 2026-01-20 18:47:43.07832899 +0000 UTC m=+0.149602058 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:47:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:43.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:43 compute-0 ceph-mon[74381]: pgmap v157: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:43 compute-0 podman[120451]: 2026-01-20 18:47:43.456987747 +0000 UTC m=+0.066221970 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:47:43 compute-0 systemd-coredump[120205]: Process 103469 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 58:
                                                    #0  0x00007fda4114732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
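The kernel segfault line at 18:47:42 and this coredump record describe the same crash from two vantage points. The fault address 0x50 matches the marked instruction in the Code: dump, 45 8b 65 50 (mov r12d, [r13+0x50]), i.e. a read through a NULL r13 that was just loaded by the preceding 4c 8b 28 (mov r13, [rax]). The kernel reports the ip against the mapped text segment of libntirpc.so.5.8, while systemd-coredump reports the offset from the ELF load base, which is the number to hand to addr2line against the library's debuginfo. Cross-checking the figures, all taken from the log:

    ip       = 0x7fda4114732e  # faulting instruction pointer (kernel line)
    seg_base = 0x7fda4112c000  # start of the mapped executable segment (kernel line)
    elf_off  = 0x2232e         # offset reported by systemd-coredump

    print(hex(ip - seg_base))              # 0x1b32e: offset within the text segment
    print(hex(ip - elf_off))               # 0x7fda41125000: implied ELF load base
    print(hex(seg_base - (ip - elf_off)))  # 0x7000: file offset of the text mapping

The nfs unit's exit with status=139 a moment later is the same event surfacing through systemd: 139 is 128 plus SIGSEGV's signal number 11, propagated up through the container runtime.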
Jan 20 18:47:43 compute-0 systemd[1]: systemd-coredump@2-120197-0.service: Deactivated successfully.
Jan 20 18:47:43 compute-0 systemd[1]: systemd-coredump@2-120197-0.service: Consumed 1.273s CPU time.
Jan 20 18:47:43 compute-0 podman[120514]: 2026-01-20 18:47:43.626794686 +0000 UTC m=+0.030621501 container died a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 18:47:43 compute-0 podman[120451]: 2026-01-20 18:47:43.628289456 +0000 UTC m=+0.237523659 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:47:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-93008f74cac035563ae48118542e269946171003c03a0ded4d97a2c21df53c79-merged.mount: Deactivated successfully.
Jan 20 18:47:43 compute-0 podman[120514]: 2026-01-20 18:47:43.694072247 +0000 UTC m=+0.097899042 container remove a66f259d59f97851d239d62a1c86fcfe8cc108afab9f56a3ae31f80c5ab5d699 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 18:47:43 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:47:43 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:47:43 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.825s CPU time.
Jan 20 18:47:43 compute-0 sudo[120701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsobpolfokhkpsvcwvaquckkmgfrqvel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934863.5672972-393-86337946096475/AnsiballZ_setup.py'
Jan 20 18:47:43 compute-0 sudo[120701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:44 compute-0 podman[120734]: 2026-01-20 18:47:44.121063584 +0000 UTC m=+0.070813604 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:47:44 compute-0 podman[120734]: 2026-01-20 18:47:44.162861183 +0000 UTC m=+0.112611203 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:47:44 compute-0 python3.9[120706]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:47:44 compute-0 sudo[119605]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:47:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:47:44 compute-0 sudo[120701]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:44 compute-0 sudo[120803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:47:44 compute-0 sudo[120803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:44 compute-0 sudo[120803]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:44.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:44 compute-0 sudo[120828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:47:44 compute-0 sudo[120828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:44 compute-0 sudo[120998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssflphmrjlyhnzxfebootifchitpmtlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934864.5344899-417-15765130418460/AnsiballZ_stat.py'
Jan 20 18:47:44 compute-0 sudo[120998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:45 compute-0 sudo[120828]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:47:45 compute-0 python3.9[121000]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:47:45 compute-0 sudo[120998]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:45.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:47:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:45 compute-0 ceph-mon[74381]: pgmap v158: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:47:45 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:47:45 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:47:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:45 compute-0 sudo[121037]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:47:45 compute-0 sudo[121037]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:45 compute-0 sudo[121037]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:45 compute-0 sudo[121062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:47:45 compute-0 sudo[121062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:45 compute-0 podman[121129]: 2026-01-20 18:47:45.711622158 +0000 UTC m=+0.028282047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:45 compute-0 podman[121129]: 2026-01-20 18:47:45.924304468 +0000 UTC m=+0.240964367 container create 13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lehmann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:47:45 compute-0 systemd[1]: Started libpod-conmon-13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260.scope.
Jan 20 18:47:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:47:46 compute-0 podman[121129]: 2026-01-20 18:47:46.034194162 +0000 UTC m=+0.350854041 container init 13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lehmann, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:47:46 compute-0 podman[121129]: 2026-01-20 18:47:46.043857189 +0000 UTC m=+0.360517078 container start 13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lehmann, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:47:46 compute-0 podman[121129]: 2026-01-20 18:47:46.047998055 +0000 UTC m=+0.364657934 container attach 13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lehmann, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:47:46 compute-0 nostalgic_lehmann[121152]: 167 167
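The short-lived nostalgic_lehmann container exists only to print "167 167": cephadm starts a throwaway container from the target image to discover the uid and gid of the ceph user inside it before deploying daemons that must own paths under /var/lib/ceph (167 is the ceph uid/gid in these images). By hand the probe looks roughly like this; the exact command cephadm runs in the container is an assumption, but stat on /var/lib/ceph yields the same two numbers:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    uid_gid = subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    print(uid_gid)  # expected: ['167', '167']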
Jan 20 18:47:46 compute-0 systemd[1]: libpod-13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260.scope: Deactivated successfully.
Jan 20 18:47:46 compute-0 podman[121129]: 2026-01-20 18:47:46.051410203 +0000 UTC m=+0.368070092 container died 13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b55eba55a825759548566642deb37901d96e40691207dfbcc6536e84343c09b-merged.mount: Deactivated successfully.
Jan 20 18:47:46 compute-0 podman[121129]: 2026-01-20 18:47:46.105238304 +0000 UTC m=+0.421898173 container remove 13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 20 18:47:46 compute-0 systemd[1]: libpod-conmon-13949c34296197546c18a8dc6f713885ffb72a5ce5e0fd9abccda805424ec260.scope: Deactivated successfully.
Jan 20 18:47:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:47:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:47:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:47:46 compute-0 sudo[121306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yooquvnnujfojzymvlkifzxuiqwpshyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934866.0129485-444-129809740401867/AnsiballZ_stat.py'
Jan 20 18:47:46 compute-0 sudo[121306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.309164304 +0000 UTC m=+0.061937628 container create 692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 18:47:46 compute-0 systemd[1]: Started libpod-conmon-692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976.scope.
Jan 20 18:47:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3321703ee3f8ca1e87c8e2a6d34683f6c85ad6ac2a9738b85b0f104864bc6ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3321703ee3f8ca1e87c8e2a6d34683f6c85ad6ac2a9738b85b0f104864bc6ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3321703ee3f8ca1e87c8e2a6d34683f6c85ad6ac2a9738b85b0f104864bc6ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3321703ee3f8ca1e87c8e2a6d34683f6c85ad6ac2a9738b85b0f104864bc6ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3321703ee3f8ca1e87c8e2a6d34683f6c85ad6ac2a9738b85b0f104864bc6ca/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.292500035 +0000 UTC m=+0.045273389 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.403061294 +0000 UTC m=+0.155834638 container init 692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.411859767 +0000 UTC m=+0.164633111 container start 692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.415312997 +0000 UTC m=+0.168086331 container attach 692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hodgkin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:47:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:46.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:46 compute-0 python3.9[121308]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:47:46 compute-0 sudo[121306]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:46 compute-0 happy_hodgkin[121314]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:47:46 compute-0 happy_hodgkin[121314]: --> All data devices are unavailable
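The lvm batch run launched at 18:47:45 passes a single LVM data device (/dev/ceph_vg0/ceph_lv0) and immediately reports it unavailable; on a node whose OSD already exists that is the expected steady-state outcome, since the LV is already consumed. The `ceph-volume lvm list --format json` call cephadm issues just below is how it reconciles that. A hypothetical check, assuming the usual ceph-volume JSON shape of OSD ids mapping to lists of LV records:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout

    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            if lv.get("lv_path") == "/dev/ceph_vg0/ceph_lv0":
                print(f"ceph_lv0 already backs OSD {osd_id}; batch was right to skip it")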
Jan 20 18:47:46 compute-0 systemd[1]: libpod-692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976.scope: Deactivated successfully.
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.738244944 +0000 UTC m=+0.491018268 container died 692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hodgkin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 18:47:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3321703ee3f8ca1e87c8e2a6d34683f6c85ad6ac2a9738b85b0f104864bc6ca-merged.mount: Deactivated successfully.
Jan 20 18:47:46 compute-0 podman[121269]: 2026-01-20 18:47:46.782861316 +0000 UTC m=+0.535634640 container remove 692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hodgkin, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 18:47:46 compute-0 systemd[1]: libpod-conmon-692c94ff52c3001fceb9a34c604f81931f911831b432fb9bcfdfd878d4686976.scope: Deactivated successfully.
Jan 20 18:47:46 compute-0 sudo[121062]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:46 compute-0 sudo[121364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:47:46 compute-0 sudo[121364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:46 compute-0 sudo[121364]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:46 compute-0 sudo[121389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:47:46 compute-0 sudo[121389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:47:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:46.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:47:47 compute-0 sudo[121558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbabseupjiwogofbynloragzafunxtum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934866.9159696-474-54792409888464/AnsiballZ_command.py'
Jan 20 18:47:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:47.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:47 compute-0 sudo[121558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:47 compute-0 ceph-mon[74381]: pgmap v159: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.280451939 +0000 UTC m=+0.047122484 container create d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_montalcini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 18:47:47 compute-0 systemd[93001]: Created slice User Background Tasks Slice.
Jan 20 18:47:47 compute-0 systemd[93001]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 18:47:47 compute-0 python3.9[121567]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
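systemctl is-system-running is a cheap readiness gate: it prints the manager's overall state (running, degraded, maintenance, ...) and exits 0 only for "running", which is why playbooks shell out to it instead of parsing unit lists. The same check from Python:

    import subprocess

    result = subprocess.run(
        ["systemctl", "is-system-running"], capture_output=True, text=True
    )
    # Prints e.g. "running" with return code 0; any other state exits non-zero.
    print(result.stdout.strip(), result.returncode)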
Jan 20 18:47:47 compute-0 systemd[1]: Started libpod-conmon-d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e.scope.
Jan 20 18:47:47 compute-0 systemd[93001]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 18:47:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.256510788 +0000 UTC m=+0.023181323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:47 compute-0 sudo[121558]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.365800206 +0000 UTC m=+0.132470761 container init d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_montalcini, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.371488905 +0000 UTC m=+0.138159410 container start d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:47:47 compute-0 stupefied_montalcini[121600]: 167 167
Jan 20 18:47:47 compute-0 systemd[1]: libpod-d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e.scope: Deactivated successfully.
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.375473965 +0000 UTC m=+0.142144500 container attach d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_montalcini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.375760246 +0000 UTC m=+0.142430761 container died d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_montalcini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:47:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-612641e6f899fe207ca427b82ff67b9484d30093b328c5680776a47695b2882e-merged.mount: Deactivated successfully.
Jan 20 18:47:47 compute-0 podman[121582]: 2026-01-20 18:47:47.414032283 +0000 UTC m=+0.180702788 container remove d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:47:47 compute-0 systemd[1]: libpod-conmon-d67e11e120f00aa20ddbc9909701addac656cf041812d2e7fea126c4d87a147e.scope: Deactivated successfully.
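[Editor's note] The throwaway container above (stupefied_montalcini, like pensive_thompson further down) prints only "167 167" and exits within milliseconds. 167:167 is the fixed uid:gid of the ceph user in Ceph's packaging, and cephadm runs short-lived containers like this to verify ownership of the cluster's state directories before touching daemons. An illustrative host-side equivalent of that check; the path and the stat call are an assumption for illustration, not taken from this log:

```python
import os

# 167:167 is the ceph user/group inside Ceph containers; cephadm expects
# the cluster's data directory to be owned by that pair.
st = os.stat("/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1")
print(st.st_uid, st.st_gid)  # expected output: 167 167
```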
Jan 20 18:47:47 compute-0 podman[121648]: 2026-01-20 18:47:47.559734424 +0000 UTC m=+0.045584231 container create 7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_golick, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:47:47 compute-0 systemd[1]: Started libpod-conmon-7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4.scope.
Jan 20 18:47:47 compute-0 podman[121648]: 2026-01-20 18:47:47.537612685 +0000 UTC m=+0.023462472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02eca010ca0632f8481f5a25a8ff97122cd2399941925bca6c6dc81c80d522e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02eca010ca0632f8481f5a25a8ff97122cd2399941925bca6c6dc81c80d522e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02eca010ca0632f8481f5a25a8ff97122cd2399941925bca6c6dc81c80d522e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02eca010ca0632f8481f5a25a8ff97122cd2399941925bca6c6dc81c80d522e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
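[Editor's note] The kernel warnings above say these xfs bind mounts "support timestamps until 2038 (0x7fffffff)": that hex value is simply the largest signed 32-bit Unix timestamp. A quick check of the arithmetic, purely illustrative:

```python
# 0x7fffffff is the maximum signed 32-bit epoch second, which is why
# the kernel phrases the limit as "until 2038".
from datetime import datetime, timezone

limit = 0x7FFFFFFF
print(limit)                                           # 2147483647
print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
```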
Jan 20 18:47:47 compute-0 podman[121648]: 2026-01-20 18:47:47.667684979 +0000 UTC m=+0.153534856 container init 7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_golick, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 18:47:47 compute-0 podman[121648]: 2026-01-20 18:47:47.680964403 +0000 UTC m=+0.166814160 container start 7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:47:47 compute-0 podman[121648]: 2026-01-20 18:47:47.688575017 +0000 UTC m=+0.174424884 container attach 7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:47:47 compute-0 nervous_golick[121666]: {
Jan 20 18:47:48 compute-0 nervous_golick[121666]:     "0": [
Jan 20 18:47:48 compute-0 nervous_golick[121666]:         {
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "devices": [
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "/dev/loop3"
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             ],
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "lv_name": "ceph_lv0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "lv_size": "21470642176",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "name": "ceph_lv0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "tags": {
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.cluster_name": "ceph",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.crush_device_class": "",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.encrypted": "0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.osd_id": "0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.type": "block",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.vdo": "0",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:                 "ceph.with_tpm": "0"
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             },
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "type": "block",
Jan 20 18:47:48 compute-0 nervous_golick[121666]:             "vg_name": "ceph_vg0"
Jan 20 18:47:48 compute-0 nervous_golick[121666]:         }
Jan 20 18:47:48 compute-0 nervous_golick[121666]:     ]
Jan 20 18:47:48 compute-0 nervous_golick[121666]: }
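[Editor's note] The JSON that nervous_golick just printed is ceph-volume's LVM inventory: a dict keyed by OSD id, each value a list of logical-volume records carrying the ceph.* tags. A minimal sketch of extracting the OSD-to-device mapping from such output, assuming it has been captured into a string (the variable name `raw` and the loop are illustrative, not part of cephadm):

```python
import json

# Abbreviated form of the "ceph-volume lvm list --format json" output
# logged above: OSD id -> list of LV records.
raw = """
{
    "0": [
        {
            "devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "lv_size": "21470642176",
            "tags": {"ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
                     "ceph.type": "block"}
        }
    ]
}
"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        print(f"osd.{osd_id}: {lv['tags']['ceph.type']} on {lv['lv_path']} "
              f"({int(lv['lv_size']) // 2**30} GiB, "
              f"devices={','.join(lv['devices'])})")
```

Run against the full record above, this prints the one OSD on this host: osd.0, a block LV on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3.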
Jan 20 18:47:48 compute-0 systemd[1]: libpod-7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4.scope: Deactivated successfully.
Jan 20 18:47:48 compute-0 podman[121648]: 2026-01-20 18:47:48.024476627 +0000 UTC m=+0.510326424 container died 7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:47:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-02eca010ca0632f8481f5a25a8ff97122cd2399941925bca6c6dc81c80d522e7-merged.mount: Deactivated successfully.
Jan 20 18:47:48 compute-0 podman[121648]: 2026-01-20 18:47:48.08333086 +0000 UTC m=+0.569180647 container remove 7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True)
Jan 20 18:47:48 compute-0 systemd[1]: libpod-conmon-7cb949e536e8dc588f714dc38a4ba0f6e5d4d0f4dd970edc20078623fe72f2f4.scope: Deactivated successfully.
Jan 20 18:47:48 compute-0 sudo[121389]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184748 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
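[Editor's note] haproxy marked backend/nfs.cephfs.2 DOWN on a "Layer4 connection problem, Connection refused": its health check is a plain TCP connect, and it hit the ganesha port while that unit was between stop and start (see the scheduled restart a few seconds later). A minimal reproduction of that style of probe; the hostname and port below are placeholders, since the real backend address lives in the haproxy config, not in this log:

```python
import socket

def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """TCP connect probe, the same idea as haproxy's Layer4 health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:  # ConnectionRefusedError, timeout, ...
        print(f"{host}:{port} DOWN ({exc})")
        return False

l4_check("compute-0.ctlplane.example.com", 2049)  # 2049 = standard NFS port
```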
Jan 20 18:47:48 compute-0 sudo[121739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:47:48 compute-0 sudo[121739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:48 compute-0 sudo[121739]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:48 compute-0 sudo[121787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:47:48 compute-0 sudo[121787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:48 compute-0 sudo[121862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moxhgdvftacavqwticjvdksypddwhpaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934867.875486-504-65064414485294/AnsiballZ_service_facts.py'
Jan 20 18:47:48 compute-0 sudo[121862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:48.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:48 compute-0 python3.9[121864]: ansible-service_facts Invoked
Jan 20 18:47:48 compute-0 network[121929]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:47:48 compute-0 network[121930]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:47:48 compute-0 network[121936]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:47:48 compute-0 podman[121915]: 2026-01-20 18:47:48.683149727 +0000 UTC m=+0.040624722 container create 60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_thompson, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:47:48 compute-0 systemd[1]: Started libpod-conmon-60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129.scope.
Jan 20 18:47:48 compute-0 podman[121915]: 2026-01-20 18:47:48.665139545 +0000 UTC m=+0.022614570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:49.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:47:49 compute-0 podman[121915]: 2026-01-20 18:47:49.395098218 +0000 UTC m=+0.752573233 container init 60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_thompson, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 18:47:49 compute-0 podman[121915]: 2026-01-20 18:47:49.40212468 +0000 UTC m=+0.759599715 container start 60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:47:49 compute-0 podman[121915]: 2026-01-20 18:47:49.406691663 +0000 UTC m=+0.764166688 container attach 60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:47:49 compute-0 pensive_thompson[121946]: 167 167
Jan 20 18:47:49 compute-0 systemd[1]: libpod-60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129.scope: Deactivated successfully.
Jan 20 18:47:49 compute-0 podman[121915]: 2026-01-20 18:47:49.412923984 +0000 UTC m=+0.770398999 container died 60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_thompson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:47:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea0502856470498272722c0bbb8010b66210705832afaab3ece123d77907e905-merged.mount: Deactivated successfully.
Jan 20 18:47:49 compute-0 podman[121915]: 2026-01-20 18:47:49.4539134 +0000 UTC m=+0.811388395 container remove 60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:47:49 compute-0 systemd[1]: libpod-conmon-60f93b12189a0d530c936280660aab98da23a32c64d75de02884e7cae8b56129.scope: Deactivated successfully.
Jan 20 18:47:49 compute-0 podman[121984]: 2026-01-20 18:47:49.665422664 +0000 UTC m=+0.055530732 container create 5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:47:49 compute-0 systemd[1]: Started libpod-conmon-5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129.scope.
Jan 20 18:47:49 compute-0 podman[121984]: 2026-01-20 18:47:49.643013664 +0000 UTC m=+0.033121742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44d28976879f42c1e1af1adf3fe3116fee51fb97ccede7d61d5924a58dfc22/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44d28976879f42c1e1af1adf3fe3116fee51fb97ccede7d61d5924a58dfc22/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44d28976879f42c1e1af1adf3fe3116fee51fb97ccede7d61d5924a58dfc22/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a44d28976879f42c1e1af1adf3fe3116fee51fb97ccede7d61d5924a58dfc22/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:49 compute-0 podman[121984]: 2026-01-20 18:47:49.773063546 +0000 UTC m=+0.163171644 container init 5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:47:49 compute-0 podman[121984]: 2026-01-20 18:47:49.781859389 +0000 UTC m=+0.171967487 container start 5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:47:49 compute-0 podman[121984]: 2026-01-20 18:47:49.785656432 +0000 UTC m=+0.175764490 container attach 5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:47:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:49] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:47:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:49] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:47:50 compute-0 ceph-mon[74381]: pgmap v160: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000040s ======
Jan 20 18:47:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:50.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000040s
Jan 20 18:47:50 compute-0 lvm[122122]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:47:50 compute-0 lvm[122122]: VG ceph_vg0 finished
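[Editor's note] The two lvm event-activation lines confirm ceph_vg0 became complete once its only PV, /dev/loop3, came online. The ceph.* tags shown in the earlier listing can also be read straight from LVM with the stock lvs JSON report; a sketch, assuming the standard lvs --reportformat json layout:

```python
import json
import subprocess

# Ask LVM for logical volumes and their tags; filter to ceph-owned LVs.
out = subprocess.run(
    ["lvs", "--reportformat", "json", "-o", "vg_name,lv_name,lv_tags"],
    check=True, capture_output=True, text=True,
).stdout

for lv in json.loads(out)["report"][0]["lv"]:
    if "ceph.osd_id=" in lv["lv_tags"]:
        print(lv["vg_name"], lv["lv_name"], lv["lv_tags"])
```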
Jan 20 18:47:50 compute-0 sudo[122114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:47:50 compute-0 sudo[122114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:50 compute-0 sudo[122114]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:50 compute-0 optimistic_satoshi[122005]: {}
Jan 20 18:47:50 compute-0 podman[121984]: 2026-01-20 18:47:50.562442106 +0000 UTC m=+0.952550184 container died 5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_satoshi, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:47:50 compute-0 systemd[1]: libpod-5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129.scope: Deactivated successfully.
Jan 20 18:47:50 compute-0 systemd[1]: libpod-5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129.scope: Consumed 1.181s CPU time.
Jan 20 18:47:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a44d28976879f42c1e1af1adf3fe3116fee51fb97ccede7d61d5924a58dfc22-merged.mount: Deactivated successfully.
Jan 20 18:47:50 compute-0 podman[121984]: 2026-01-20 18:47:50.614295049 +0000 UTC m=+1.004403107 container remove 5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:47:50 compute-0 systemd[1]: libpod-conmon-5c6a6ab54804361e81ccb16d73f5847aeae9f0d72b53229d9ad80f79113f3129.scope: Deactivated successfully.
Jan 20 18:47:50 compute-0 sudo[121787]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:47:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:47:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:50 compute-0 sudo[122155]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:47:50 compute-0 sudo[122155]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:47:50 compute-0 sudo[122155]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:51.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:47:51 compute-0 ceph-mon[74381]: pgmap v161: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:52.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:53.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:53 compute-0 sudo[121862]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:53 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 3.
Jan 20 18:47:53 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:47:53 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.825s CPU time.
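[Editor's note] systemd is on its third scheduled restart of the ganesha unit ("restart counter is at 3"), which lines up with the haproxy DOWN event above. The counter can be read back from systemd directly; a small sketch, with the unit name copied from the log:

```python
import subprocess

unit = ("ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1"
        "@nfs.cephfs.2.0.compute-0.ulclbx.service")

# systemd tracks how many times a unit has been automatically restarted.
out = subprocess.run(
    ["systemctl", "show", unit, "--property", "NRestarts"],
    check=True, capture_output=True, text=True,
).stdout.strip()
print(out)  # e.g. "NRestarts=3"
```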
Jan 20 18:47:54 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:47:54 compute-0 ceph-mon[74381]: pgmap v162: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:54 compute-0 podman[122309]: 2026-01-20 18:47:54.209101078 +0000 UTC m=+0.038072230 container create b744e04ad1d52c40600907aad25ce1135fd9d076abab06b85b9daa26a8ff322f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deef52dab2c8781af92dfddb1afabc9e4883a1251a0089dd9474b0e6197b0942/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deef52dab2c8781af92dfddb1afabc9e4883a1251a0089dd9474b0e6197b0942/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deef52dab2c8781af92dfddb1afabc9e4883a1251a0089dd9474b0e6197b0942/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/deef52dab2c8781af92dfddb1afabc9e4883a1251a0089dd9474b0e6197b0942/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:47:54 compute-0 podman[122309]: 2026-01-20 18:47:54.260790344 +0000 UTC m=+0.089761506 container init b744e04ad1d52c40600907aad25ce1135fd9d076abab06b85b9daa26a8ff322f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:47:54 compute-0 podman[122309]: 2026-01-20 18:47:54.265491242 +0000 UTC m=+0.094462394 container start b744e04ad1d52c40600907aad25ce1135fd9d076abab06b85b9daa26a8ff322f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 18:47:54 compute-0 bash[122309]: b744e04ad1d52c40600907aad25ce1135fd9d076abab06b85b9daa26a8ff322f
Jan 20 18:47:54 compute-0 podman[122309]: 2026-01-20 18:47:54.192870307 +0000 UTC m=+0.021841479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:47:54 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:47:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:47:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:47:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:54.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:54 compute-0 sudo[122512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmqvqvasjbukzayigbuvgyekudrbiooz ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768934874.2290587-549-109849777303966/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768934874.2290587-549-109849777303966/args'
Jan 20 18:47:54 compute-0 sudo[122512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:54 compute-0 sudo[122512]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:47:54
Jan 20 18:47:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:47:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:47:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['images', '.nfs', 'default.rgw.meta', '.mgr', 'volumes', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'cephfs.cephfs.data']
Jan 20 18:47:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:47:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
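[Editor's note] The pg_autoscaler lines encode a simple computation: each pool's share of raw capacity, times its bias, times the cluster's PG budget, gives the raw pg target, which is then quantized to a power of two and left alone unless it differs enough from the current pg_num. The budget is recoverable from the numbers logged here: the 64411926528-byte capacity is exactly three of the 21470642176-byte OSD LVs, and with the default mon_target_pg_per_osd of 100 the budget is 3 x 100 = 300, which reproduces every "pg target" value above. A sketch under those assumptions:

```python
# Reproduce the pg_autoscaler arithmetic visible in the log, assuming
# 3 OSDs and the default mon_target_pg_per_osd = 100 (budget = 300).
pg_budget = 3 * 100

for pool, usage_ratio, bias in [
    (".mgr",               7.185749983720779e-06, 1.0),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
]:
    target = usage_ratio * bias * pg_budget
    # The autoscaler then rounds this to a power of two and keeps the
    # current pg_num unless the change crosses its threshold.
    print(f"{pool}: raw pg target {target}")
# .mgr               -> 0.0021557249951162337  (matches the log)
# cephfs.cephfs.meta -> 0.0006104707950771635  (matches the log)
```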
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:47:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:47:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:47:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:55.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:55 compute-0 sudo[122679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhefhxdyqymvwhvdmapstkutpcodvkgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934875.0401645-582-89579764095755/AnsiballZ_dnf.py'
Jan 20 18:47:55 compute-0 sudo[122679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:55 compute-0 python3.9[122681]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:47:56 compute-0 ceph-mon[74381]: pgmap v163: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:47:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:56.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:56 compute-0 sudo[122679]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:47:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:47:56.969Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
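[Editor's note] Alertmanager gave up delivering a notification to the ceph-dashboard receivers on compute-1 and compute-2: both webhook POSTs hit the context deadline. A hedged way to probe one of those receivers by hand; the URL is copied verbatim from the log line, while the empty payload and 5-second timeout are illustrative:

```python
import json
import urllib.request

# POST a minimal alert payload to the dashboard's Prometheus receiver,
# mirroring what Alertmanager's webhook integration does.
req = urllib.request.Request(
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    data=json.dumps({"alerts": []}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)
except OSError as exc:  # connection refused, timeout, ...
    print(f"receiver unreachable: {exc}")
```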
Jan 20 18:47:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:57.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:47:58 compute-0 ceph-mon[74381]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:47:58 compute-0 sudo[122836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jitbwouxpqmsoirxfeqpfztlimjtjyzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934877.69175-621-181599133617733/AnsiballZ_package_facts.py'
Jan 20 18:47:58 compute-0 sudo[122836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:47:58.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:58 compute-0 python3.9[122838]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 18:47:58 compute-0 sudo[122836]: pam_unix(sudo:session): session closed for user root
Jan 20 18:47:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:47:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:47:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:47:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:47:59.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:47:59 compute-0 sudo[122990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dksrujswivdqkrukeahcihliaccmhhbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934879.5324707-651-98841197409806/AnsiballZ_stat.py'
Jan 20 18:47:59 compute-0 sudo[122990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:47:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:59] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:47:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:47:59] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:47:59 compute-0 python3.9[122992]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:00 compute-0 sudo[122990]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:00 compute-0 sudo[123068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkwyuojhtqflhocvemdzmkbqrsacbupf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934879.5324707-651-98841197409806/AnsiballZ_file.py'
Jan 20 18:48:00 compute-0 ceph-mon[74381]: pgmap v165: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:00 compute-0 sudo[123068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:48:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:48:00 compute-0 python3.9[123070]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:00 compute-0 sudo[123068]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:00.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:01.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:01 compute-0 sudo[123220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-araelmkqyonrqnxysjihsgilcnacnxyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934880.942844-687-127697868671888/AnsiballZ_stat.py'
Jan 20 18:48:01 compute-0 sudo[123220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:01 compute-0 ceph-mon[74381]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:01 compute-0 python3.9[123222]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:01 compute-0 sudo[123220]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:01 compute-0 sudo[123300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnotbzpsbwaatbuoreitbczwyvmgfodq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934880.942844-687-127697868671888/AnsiballZ_file.py'
Jan 20 18:48:01 compute-0 sudo[123300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:01 compute-0 python3.9[123302]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:01 compute-0 sudo[123300]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:02.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:03.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:03 compute-0 sudo[123452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnatcxpjrftsrvonnmdotcygrnfuymx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934883.051151-741-42433591827205/AnsiballZ_lineinfile.py'
Jan 20 18:48:03 compute-0 sudo[123452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:03 compute-0 python3.9[123454]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:03 compute-0 sudo[123452]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:04 compute-0 ceph-mon[74381]: pgmap v167: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000040s ======
Jan 20 18:48:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:04.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000040s
Jan 20 18:48:04 compute-0 sudo[123606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnsdjmsrrmuphpvwtqewkfjavskuwsmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934884.7088544-786-123418523863241/AnsiballZ_setup.py'
Jan 20 18:48:04 compute-0 sudo[123606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:05.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:05 compute-0 python3.9[123608]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:48:05 compute-0 ceph-mon[74381]: pgmap v168: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:05 compute-0 sudo[123606]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:06 compute-0 sudo[123692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibajujijnmkyxjudocbuvbdviffnhgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934884.7088544-786-123418523863241/AnsiballZ_systemd.py'
Jan 20 18:48:06 compute-0 sudo[123692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000039s ======
Jan 20 18:48:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:06.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000039s
Jan 20 18:48:06 compute-0 python3.9[123694]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:48:06 compute-0 sudo[123692]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:48:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:06.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:48:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:07.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:07 compute-0 sshd-session[117508]: Connection closed by 192.168.122.30 port 56578
Jan 20 18:48:07 compute-0 sshd-session[117505]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:48:07 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 18:48:07 compute-0 systemd[1]: session-43.scope: Consumed 24.040s CPU time.
Jan 20 18:48:07 compute-0 systemd-logind[796]: Session 43 logged out. Waiting for processes to exit.
Jan 20 18:48:07 compute-0 systemd-logind[796]: Removed session 43.
Jan 20 18:48:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:08 compute-0 ceph-mon[74381]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:48:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40001240 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000039s ======
Jan 20 18:48:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:08.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000039s
Jan 20 18:48:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:09.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:09] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:48:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:09] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:48:10 compute-0 ceph-mon[74381]: pgmap v170: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:48:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184810 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:48:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:10.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:10 compute-0 sudo[123739]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:48:10 compute-0 sudo[123739]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:10 compute-0 sudo[123739]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40001f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:11.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:11 compute-0 ceph-mon[74381]: pgmap v171: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:48:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:12.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:12 compute-0 sshd-session[123766]: Accepted publickey for zuul from 192.168.122.30 port 58906 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:48:12 compute-0 systemd-logind[796]: New session 44 of user zuul.
Jan 20 18:48:12 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 20 18:48:12 compute-0 sshd-session[123766]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:48:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:13.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:13 compute-0 sudo[123919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwapcmououkdmtmnlvciqwnjhpsdmqhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934892.990101-21-94452444212118/AnsiballZ_file.py'
Jan 20 18:48:13 compute-0 sudo[123919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:13 compute-0 python3.9[123921]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:13 compute-0 sudo[123919]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:14 compute-0 ceph-mon[74381]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40001f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:14.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44002520 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:15.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:15 compute-0 sudo[124073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkklsxlsnjdbajchcnryikhbewymfdem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934894.8874712-57-59140989642831/AnsiballZ_stat.py'
Jan 20 18:48:15 compute-0 sudo[124073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:15 compute-0 python3.9[124075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:15 compute-0 sudo[124073]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:15 compute-0 sudo[124153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhpbkfkflfbnnfvqesglheiteivftbpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934894.8874712-57-59140989642831/AnsiballZ_file.py'
Jan 20 18:48:15 compute-0 sudo[124153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:15 compute-0 python3.9[124155]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:15 compute-0 sudo[124153]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:16 compute-0 ceph-mon[74381]: pgmap v173: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:16.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:16 compute-0 sshd-session[123769]: Connection closed by 192.168.122.30 port 58906
Jan 20 18:48:16 compute-0 sshd-session[123766]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:48:16 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 18:48:16 compute-0 systemd[1]: session-44.scope: Consumed 1.450s CPU time.
Jan 20 18:48:16 compute-0 systemd-logind[796]: Session 44 logged out. Waiting for processes to exit.
Jan 20 18:48:16 compute-0 systemd-logind[796]: Removed session 44.
Jan 20 18:48:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:16.972Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:48:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:16.972Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:48:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:48:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:17.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:48:17 compute-0 ceph-mon[74381]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:48:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44003230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:48:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:19.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:19] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 18:48:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:19] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 18:48:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:20 compute-0 ceph-mon[74381]: pgmap v175: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:48:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0090c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:48:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:48:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:21.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:48:21 compute-0 ceph-mon[74381]: pgmap v176: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:48:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:22 compute-0 sshd-session[124187]: Accepted publickey for zuul from 192.168.122.30 port 33022 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:48:22 compute-0 systemd-logind[796]: New session 45 of user zuul.
Jan 20 18:48:22 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 20 18:48:22 compute-0 sshd-session[124187]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:48:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:22.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:23.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:23 compute-0 python3.9[124340]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:48:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:24 compute-0 ceph-mon[74381]: pgmap v177: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:24 compute-0 sudo[124496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrzwnpycfkdmqkfxwufrmgpizyqeeuzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934904.004956-54-217267830340473/AnsiballZ_file.py'
Jan 20 18:48:24 compute-0 sudo[124496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:48:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:24.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:48:24 compute-0 python3.9[124498]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:24 compute-0 sudo[124496]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:48:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:48:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:48:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:48:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:48:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:48:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:25.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:25 compute-0 ceph-mon[74381]: pgmap v178: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:48:25 compute-0 sudo[124671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xebsnmcucnnfpnmfsmgvdlrjilxecbre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934904.9958231-78-121698196858419/AnsiballZ_stat.py'
Jan 20 18:48:25 compute-0 sudo[124671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:25 compute-0 python3.9[124673]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:25 compute-0 sudo[124671]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:26 compute-0 sudo[124751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eymxikozcohvzblpdorlcuevhpbewmnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934904.9958231-78-121698196858419/AnsiballZ_file.py'
Jan 20 18:48:26 compute-0 sudo[124751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:26 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:26 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:26 compute-0 python3.9[124753]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.evdfxg1h recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:26 compute-0 sudo[124751]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:26.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:26 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:26.973Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:48:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 411 B/s rd, 0 op/s
Jan 20 18:48:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:27.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:27 compute-0 sudo[124905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbotfrqomafzrwjlycepqdqgxsgasrif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934907.4660676-138-215075428488937/AnsiballZ_stat.py'
Jan 20 18:48:27 compute-0 sudo[124905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:27 compute-0 python3.9[124907]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:28 compute-0 sudo[124905]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:28 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:28 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:28 compute-0 sudo[124983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cblrrfhzevegvpabhkwytdrfnolzcdtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934907.4660676-138-215075428488937/AnsiballZ_file.py'
Jan 20 18:48:28 compute-0 sudo[124983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:28.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:28 compute-0 python3.9[124985]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.xzwgbean recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:28 compute-0 sudo[124983]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:28 compute-0 ceph-mon[74381]: pgmap v179: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 411 B/s rd, 0 op/s
Jan 20 18:48:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:28 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:29 compute-0 sudo[125135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjzsjzohthjdkqzifzozowxajxbosycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934908.7908704-177-201597628559470/AnsiballZ_file.py'
Jan 20 18:48:29 compute-0 sudo[125135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:29 compute-0 python3.9[125137]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:48:29 compute-0 sudo[125135]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 246 B/s rd, 0 op/s
Jan 20 18:48:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:29.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:29] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:48:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:29] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:48:30 compute-0 sudo[125289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hinlonogrrlcufthmuwptkgdlxiwezei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934909.7253208-201-106374386942965/AnsiballZ_stat.py'
Jan 20 18:48:30 compute-0 sudo[125289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:30 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:30 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:30 compute-0 python3.9[125291]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:30 compute-0 sudo[125289]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:30.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:30 compute-0 sudo[125367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzimvrisshguifhgvdimxgkfasapcqqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934909.7253208-201-106374386942965/AnsiballZ_file.py'
Jan 20 18:48:30 compute-0 sudo[125367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:30 compute-0 python3.9[125369]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:48:30 compute-0 sudo[125370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:48:30 compute-0 sudo[125370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:30 compute-0 sudo[125370]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:30 compute-0 sudo[125367]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:30 compute-0 ceph-mon[74381]: pgmap v180: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 246 B/s rd, 0 op/s
Jan 20 18:48:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:30 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:31 compute-0 sudo[125544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulxprnkwroneafkcjqgkfldanttcmchg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934910.8721788-201-242304246494255/AnsiballZ_stat.py'
Jan 20 18:48:31 compute-0 sudo[125544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:31 compute-0 python3.9[125546]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:31 compute-0 sudo[125544]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 328 B/s rd, 0 op/s
Jan 20 18:48:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:31.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:31 compute-0 sudo[125622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzttuxywmxmrscbysigfwrroguuquhyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934910.8721788-201-242304246494255/AnsiballZ_file.py'
Jan 20 18:48:31 compute-0 sudo[125622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:31 compute-0 python3.9[125624]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:48:31 compute-0 sudo[125622]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:32 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:32 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:32.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:32 compute-0 sudo[125776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjawuzhzkvgbubrokegsclaxjklgcpyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934912.3987806-270-5552780263146/AnsiballZ_file.py'
Jan 20 18:48:32 compute-0 sudo[125776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:32 compute-0 ceph-mon[74381]: pgmap v181: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 328 B/s rd, 0 op/s
Jan 20 18:48:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:32 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:32 compute-0 python3.9[125778]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:32 compute-0 sudo[125776]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:33 compute-0 sudo[125928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avqdomiszkuktiedjcvkcqnqqqpnyfrz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934913.1703594-294-64937019377738/AnsiballZ_stat.py'
Jan 20 18:48:33 compute-0 sudo[125928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 246 B/s rd, 0 op/s
Jan 20 18:48:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:33.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:33 compute-0 python3.9[125930]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:33 compute-0 sudo[125928]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:33 compute-0 sudo[126008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftvdvgsyqmritwuncnznddriqrvukmbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934913.1703594-294-64937019377738/AnsiballZ_file.py'
Jan 20 18:48:33 compute-0 sudo[126008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:34 compute-0 python3.9[126010]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:34 compute-0 sudo[126008]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:34 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:34 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:34.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:34 compute-0 sudo[126160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chkalibgitjkmdxfdrkyvkvkkxajfifg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934914.5230904-330-273860768932708/AnsiballZ_stat.py'
Jan 20 18:48:34 compute-0 sudo[126160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:34 compute-0 ceph-mon[74381]: pgmap v182: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 246 B/s rd, 0 op/s
Jan 20 18:48:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:34 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:34 compute-0 python3.9[126162]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:35 compute-0 sudo[126160]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:35 compute-0 sudo[126238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owtnkqiywegpjkkikacxwavnqezskeov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934914.5230904-330-273860768932708/AnsiballZ_file.py'
Jan 20 18:48:35 compute-0 sudo[126238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 246 B/s rd, 0 op/s
Jan 20 18:48:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:35.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:35 compute-0 python3.9[126240]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:35 compute-0 sudo[126238]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:36 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:36 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30002720 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:48:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:48:36 compute-0 sudo[126392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxahwaiqnhpstkosukbknikkcjnnmttz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934915.970143-366-144891798466151/AnsiballZ_systemd.py'
Jan 20 18:48:36 compute-0 sudo[126392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:36 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:36 compute-0 ceph-mon[74381]: pgmap v183: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 246 B/s rd, 0 op/s
Jan 20 18:48:36 compute-0 python3.9[126394]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:48:36 compute-0 systemd[1]: Reloading.
Jan 20 18:48:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:36.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:48:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:36.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:48:36 compute-0 systemd-rc-local-generator[126418]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:48:36 compute-0 systemd-sysv-generator[126422]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:48:37 compute-0 sudo[126392]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 411 B/s rd, 0 op/s
Jan 20 18:48:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:48:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:48:37 compute-0 sudo[126582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhxpujreaqdcrvvfxmirsqecjolqenc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934917.5912585-390-278758076194226/AnsiballZ_stat.py'
Jan 20 18:48:37 compute-0 sudo[126582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:38 compute-0 python3.9[126584]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:38 compute-0 sudo[126582]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:38 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:38 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:38 compute-0 sudo[126660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aguflptztqklsiufbjytarmmyeyiuosz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934917.5912585-390-278758076194226/AnsiballZ_file.py'
Jan 20 18:48:38 compute-0 sudo[126660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:38 compute-0 python3.9[126662]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:38 compute-0 sudo[126660]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:38 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:38 compute-0 ceph-mon[74381]: pgmap v184: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 411 B/s rd, 0 op/s
Jan 20 18:48:39 compute-0 sudo[126812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieepgevsepqijqorqzuprjawhmxliyyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934918.9120584-426-99406844084587/AnsiballZ_stat.py'
Jan 20 18:48:39 compute-0 sudo[126812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:39.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:39 compute-0 python3.9[126814]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:39 compute-0 sudo[126812]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:39] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:48:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:39] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:48:39 compute-0 sudo[126892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpjtshbplhlvkowcbqlbmeegebzbznuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934918.9120584-426-99406844084587/AnsiballZ_file.py'
Jan 20 18:48:39 compute-0 sudo[126892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:40 compute-0 python3.9[126894]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:40 compute-0 sudo[126892]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:40 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:40 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:40.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:40 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:40 compute-0 sudo[127044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdfkktrxjopjstjkmieapnzkslnjxxxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934920.3140185-462-5103613318321/AnsiballZ_systemd.py'
Jan 20 18:48:40 compute-0 sudo[127044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:41 compute-0 python3.9[127046]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:48:41 compute-0 systemd[1]: Reloading.
Jan 20 18:48:41 compute-0 ceph-mon[74381]: pgmap v185: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:48:41 compute-0 systemd-sysv-generator[127072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:48:41 compute-0 systemd-rc-local-generator[127066]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:48:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:48:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:41.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:41 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 18:48:41 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 18:48:41 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 18:48:41 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 18:48:41 compute-0 sudo[127044]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:42 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:42 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:42 compute-0 ceph-mon[74381]: pgmap v186: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:48:42 compute-0 python3.9[127240]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:48:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:42.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:42 compute-0 network[127257]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:48:42 compute-0 network[127258]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:48:42 compute-0 network[127259]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:48:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:42 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:43.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:44 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:44 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:44 compute-0 ceph-mon[74381]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:44 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:45.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:46 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:46 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:46.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:46 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c0099e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:46 compute-0 ceph-mon[74381]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:46.976Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:48:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:48:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:47.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:47 compute-0 sudo[127525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjgnuuxrttlameyedtpocuknogpmhtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934927.3842692-540-260930928024833/AnsiballZ_stat.py'
Jan 20 18:48:47 compute-0 sudo[127525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:47 compute-0 python3.9[127527]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:47 compute-0 sudo[127525]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:48 compute-0 sudo[127603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dheggbcdfkfzhucoforcvymiswzgtowf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934927.3842692-540-260930928024833/AnsiballZ_file.py'
Jan 20 18:48:48 compute-0 sudo[127603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:48 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:48 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 18:48:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2553 writes, 12K keys, 2552 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 2553 writes, 2552 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2553 writes, 12K keys, 2552 commit groups, 1.0 writes per commit group, ingest: 23.97 MB, 0.04 MB/s
                                           Interval WAL: 2553 writes, 2552 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    119.5      0.17              0.06         4    0.041       0      0       0.0       0.0
                                             L6      1/0   14.39 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.0    132.6    116.5      0.34              0.11         3    0.112     12K   1360       0.0       0.0
                                            Sum      1/0   14.39 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     88.8    117.5      0.50              0.17         7    0.072     12K   1360       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.0     89.7    118.5      0.50              0.17         6    0.083     12K   1360       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    132.6    116.5      0.34              0.11         3    0.112     12K   1360       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    122.6      0.16              0.06         3    0.054       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.019, interval 0.019
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.10 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564b95c0c9b0#2 capacity: 304.00 MB usage: 1.74 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 8.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(133,1.60 MB,0.525856%) FilterBlock(8,44.80 KB,0.0143904%) IndexBlock(8,94.95 KB,0.0305025%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 18:48:48 compute-0 python3.9[127605]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:48 compute-0 sudo[127603]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:48.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:48 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:48 compute-0 ceph-mon[74381]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:48:49 compute-0 sudo[127755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvmsrqtlmzliewubykatlvrkvngfgatl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934928.8621035-579-230533861459877/AnsiballZ_file.py'
Jan 20 18:48:49 compute-0 sudo[127755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:49 compute-0 python3.9[127757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:49 compute-0 sudo[127755]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:49.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:49] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:48:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:49] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:48:50 compute-0 sudo[127909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcqzktgplmhclddgpqtkzwzsvzaqgest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934929.656772-603-160718972291674/AnsiballZ_stat.py'
Jan 20 18:48:50 compute-0 sudo[127909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:50 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:50 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:50 compute-0 python3.9[127911]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:50 compute-0 sudo[127909]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:50.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:50 compute-0 sudo[127988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jujocomnymcjitqmpadzmchtfpwrgeog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934929.656772-603-160718972291674/AnsiballZ_file.py'
Jan 20 18:48:50 compute-0 sudo[127988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:50 compute-0 python3.9[127990]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:50 compute-0 sudo[127988]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:50 compute-0 sudo[127991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:48:50 compute-0 sudo[127991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:50 compute-0 sudo[127991]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:50 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d500013a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:51 compute-0 sudo[128040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:48:51 compute-0 sudo[128040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:51 compute-0 sudo[128040]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:51 compute-0 sudo[128065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:48:51 compute-0 sudo[128065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:51 compute-0 ceph-mon[74381]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:48:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:51.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:51 compute-0 sudo[128065]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:48:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:48:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:51 compute-0 sudo[128175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:48:51 compute-0 sudo[128175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:51 compute-0 sudo[128175]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:51 compute-0 sudo[128223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:48:52 compute-0 sudo[128223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:52 compute-0 sudo[128298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htsihaobpgwccytaonaynzemoeodimfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934931.545387-648-54111299424727/AnsiballZ_timezone.py'
Jan 20 18:48:52 compute-0 sudo[128298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:52 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:52 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:52 compute-0 python3.9[128300]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:48:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:52 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:48:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:48:52 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 18:48:52 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.453030206 +0000 UTC m=+0.040438809 container create e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:48:52 compute-0 systemd[1]: Started libpod-conmon-e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177.scope.
Jan 20 18:48:52 compute-0 sudo[128298]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:48:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:52.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.434070998 +0000 UTC m=+0.021479621 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.540129617 +0000 UTC m=+0.127538240 container init e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.549814676 +0000 UTC m=+0.137223289 container start e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.553486407 +0000 UTC m=+0.140895020 container attach e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mirzakhani, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:48:52 compute-0 goofy_mirzakhani[128360]: 167 167
Jan 20 18:48:52 compute-0 systemd[1]: libpod-e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177.scope: Deactivated successfully.
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.559905535 +0000 UTC m=+0.147314158 container died e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mirzakhani, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 18:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b8f8b751ccfd59e41607629a3b2b942c648eda0c685d02425189e8d616e4939-merged.mount: Deactivated successfully.
Jan 20 18:48:52 compute-0 podman[128342]: 2026-01-20 18:48:52.602346403 +0000 UTC m=+0.189755006 container remove e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:48:52 compute-0 systemd[1]: libpod-conmon-e5de33668c1ffcb6a0a1e244622ff32796ca7bd6057759100a21a5456e7c1177.scope: Deactivated successfully.
Jan 20 18:48:52 compute-0 podman[128407]: 2026-01-20 18:48:52.749021375 +0000 UTC m=+0.045283119 container create cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:48:52 compute-0 systemd[1]: Started libpod-conmon-cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f.scope.
Jan 20 18:48:52 compute-0 podman[128407]: 2026-01-20 18:48:52.730432317 +0000 UTC m=+0.026694081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:48:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6724f7fcee7b434239e629fed58154d6ac327d02068ff3d498caf93c661f58cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6724f7fcee7b434239e629fed58154d6ac327d02068ff3d498caf93c661f58cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6724f7fcee7b434239e629fed58154d6ac327d02068ff3d498caf93c661f58cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6724f7fcee7b434239e629fed58154d6ac327d02068ff3d498caf93c661f58cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6724f7fcee7b434239e629fed58154d6ac327d02068ff3d498caf93c661f58cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:52 compute-0 podman[128407]: 2026-01-20 18:48:52.855531696 +0000 UTC m=+0.151793460 container init cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:48:52 compute-0 podman[128407]: 2026-01-20 18:48:52.863300867 +0000 UTC m=+0.159562611 container start cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cannon, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:48:52 compute-0 podman[128407]: 2026-01-20 18:48:52.866691771 +0000 UTC m=+0.162953565 container attach cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 18:48:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:52 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:53 compute-0 flamboyant_cannon[128423]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:48:53 compute-0 flamboyant_cannon[128423]: --> All data devices are unavailable
Jan 20 18:48:53 compute-0 systemd[1]: libpod-cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f.scope: Deactivated successfully.
Jan 20 18:48:53 compute-0 podman[128407]: 2026-01-20 18:48:53.213016383 +0000 UTC m=+0.509278137 container died cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:48:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6724f7fcee7b434239e629fed58154d6ac327d02068ff3d498caf93c661f58cb-merged.mount: Deactivated successfully.
Jan 20 18:48:53 compute-0 podman[128407]: 2026-01-20 18:48:53.260858884 +0000 UTC m=+0.557120638 container remove cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_cannon, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:48:53 compute-0 systemd[1]: libpod-conmon-cfe3cc2360238c3b33809cac501b40cc6c162af4a8a00a788800be6e6215d13f.scope: Deactivated successfully.
Jan 20 18:48:53 compute-0 ceph-mon[74381]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:48:53 compute-0 sudo[128223]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:53 compute-0 sudo[128449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:48:53 compute-0 sudo[128449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:53 compute-0 sudo[128449]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:53.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:53 compute-0 sudo[128474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:48:53 compute-0 sudo[128474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:53 compute-0 podman[128564]: 2026-01-20 18:48:53.914983238 +0000 UTC m=+0.043754942 container create d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 18:48:53 compute-0 systemd[1]: Started libpod-conmon-d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d.scope.
Jan 20 18:48:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:48:53 compute-0 podman[128564]: 2026-01-20 18:48:53.983456428 +0000 UTC m=+0.112228162 container init d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:48:53 compute-0 podman[128564]: 2026-01-20 18:48:53.896636674 +0000 UTC m=+0.025408398 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:48:53 compute-0 podman[128564]: 2026-01-20 18:48:53.991679022 +0000 UTC m=+0.120450746 container start d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:48:53 compute-0 podman[128564]: 2026-01-20 18:48:53.995593798 +0000 UTC m=+0.124365562 container attach d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 18:48:53 compute-0 fervent_villani[128615]: 167 167
Jan 20 18:48:53 compute-0 systemd[1]: libpod-d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d.scope: Deactivated successfully.
Jan 20 18:48:53 compute-0 podman[128564]: 2026-01-20 18:48:53.997107306 +0000 UTC m=+0.125879030 container died d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-42bd4d34dd53bab9db5b4c582c4755073004d03218ce2329f78885c133c76f84-merged.mount: Deactivated successfully.
Jan 20 18:48:54 compute-0 podman[128564]: 2026-01-20 18:48:54.036239752 +0000 UTC m=+0.165011456 container remove d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:48:54 compute-0 systemd[1]: libpod-conmon-d86db2b4e9658559b066562b3c10e2ff61fe218bee30968e45d5750c8a3e063d.scope: Deactivated successfully.
Jan 20 18:48:54 compute-0 sudo[128700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xedeqejhcnxawfbzhqsvvvjiqjrioyof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934933.851999-675-220590885502608/AnsiballZ_file.py'
Jan 20 18:48:54 compute-0 sudo[128700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.214579997 +0000 UTC m=+0.047834073 container create 037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:48:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:54 compute-0 systemd[1]: Started libpod-conmon-037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2.scope.
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.190324097 +0000 UTC m=+0.023578203 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:48:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d11d3ecc04068c5d222885e995da9229e800f4717694f8099a486c2750214a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d11d3ecc04068c5d222885e995da9229e800f4717694f8099a486c2750214a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d11d3ecc04068c5d222885e995da9229e800f4717694f8099a486c2750214a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d11d3ecc04068c5d222885e995da9229e800f4717694f8099a486c2750214a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.305361178 +0000 UTC m=+0.138615264 container init 037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shockley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:48:54 compute-0 python3.9[128702]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.316314648 +0000 UTC m=+0.149568714 container start 037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shockley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.319630641 +0000 UTC m=+0.152884767 container attach 037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:48:54 compute-0 sudo[128700]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:48:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:54.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]: {
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:     "0": [
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:         {
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "devices": [
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "/dev/loop3"
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             ],
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "lv_name": "ceph_lv0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "lv_size": "21470642176",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "name": "ceph_lv0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "tags": {
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.cluster_name": "ceph",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.crush_device_class": "",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.encrypted": "0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.osd_id": "0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.type": "block",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.vdo": "0",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:                 "ceph.with_tpm": "0"
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             },
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "type": "block",
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:             "vg_name": "ceph_vg0"
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:         }
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]:     ]
Jan 20 18:48:54 compute-0 intelligent_shockley[128725]: }
Jan 20 18:48:54 compute-0 ceph-mon[74381]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:54 compute-0 systemd[1]: libpod-037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2.scope: Deactivated successfully.
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.658298413 +0000 UTC m=+0.491552519 container died 037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shockley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 18:48:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7d11d3ecc04068c5d222885e995da9229e800f4717694f8099a486c2750214a-merged.mount: Deactivated successfully.
Jan 20 18:48:54 compute-0 podman[128708]: 2026-01-20 18:48:54.712050241 +0000 UTC m=+0.545304317 container remove 037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_shockley, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:48:54 compute-0 systemd[1]: libpod-conmon-037120bee3fe25a1e16a197d43d074eb7b225b5a98bbed7e4e791be2365707c2.scope: Deactivated successfully.
Jan 20 18:48:54 compute-0 sudo[128474]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:54 compute-0 sudo[128773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:48:54 compute-0 sudo[128773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:54 compute-0 sudo[128773]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:48:54
Jan 20 18:48:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:48:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:48:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'default.rgw.control', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'volumes', '.rgw.root', '.mgr', 'default.rgw.log', 'backups']
Jan 20 18:48:54 compute-0 sudo[128799]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:48:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:48:54 compute-0 sudo[128799]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:48:55 compute-0 sudo[128975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fclrcqppnmfrdnuswhxryldnelyntplu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934934.9072978-699-37867579138134/AnsiballZ_stat.py'
Jan 20 18:48:55 compute-0 sudo[128975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.336113412 +0000 UTC m=+0.050573460 container create a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclaren, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:48:55 compute-0 systemd[1]: Started libpod-conmon-a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea.scope.
Jan 20 18:48:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.3133692 +0000 UTC m=+0.027829268 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.416273081 +0000 UTC m=+0.130733129 container init a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclaren, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.426099434 +0000 UTC m=+0.140559472 container start a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.429034757 +0000 UTC m=+0.143494825 container attach a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:48:55 compute-0 heuristic_mclaren[129006]: 167 167
Jan 20 18:48:55 compute-0 systemd[1]: libpod-a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea.scope: Deactivated successfully.
Jan 20 18:48:55 compute-0 conmon[129006]: conmon a783a5e737c2dc744893 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea.scope/container/memory.events
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.432777179 +0000 UTC m=+0.147237267 container died a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 18:48:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:55 compute-0 python3.9[128986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf4c271b9cf966ecf15561b429b32122082364bdd4da17e480fbd7f50775cfff-merged.mount: Deactivated successfully.
Jan 20 18:48:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:55.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:55 compute-0 podman[128990]: 2026-01-20 18:48:55.477931434 +0000 UTC m=+0.192391472 container remove a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:48:55 compute-0 systemd[1]: libpod-conmon-a783a5e737c2dc744893adba4fad9cb7a9940698032cec524fccd2035f7faeea.scope: Deactivated successfully.
Jan 20 18:48:55 compute-0 sudo[128975]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:55 compute-0 podman[129055]: 2026-01-20 18:48:55.642045337 +0000 UTC m=+0.046286684 container create 7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:48:55 compute-0 systemd[1]: Started libpod-conmon-7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e.scope.
Jan 20 18:48:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:48:55 compute-0 sudo[129124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzivhycronpvmokeoalygqxkjoxdephv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934934.9072978-699-37867579138134/AnsiballZ_file.py'
Jan 20 18:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a3e710f1870fd94654c401f5b098a2744799def538d07d3d72c95bdc592ca5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a3e710f1870fd94654c401f5b098a2744799def538d07d3d72c95bdc592ca5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a3e710f1870fd94654c401f5b098a2744799def538d07d3d72c95bdc592ca5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1a3e710f1870fd94654c401f5b098a2744799def538d07d3d72c95bdc592ca5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:48:55 compute-0 sudo[129124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:55 compute-0 podman[129055]: 2026-01-20 18:48:55.623689343 +0000 UTC m=+0.027930700 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:48:55 compute-0 podman[129055]: 2026-01-20 18:48:55.723929959 +0000 UTC m=+0.128171316 container init 7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cori, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:48:55 compute-0 podman[129055]: 2026-01-20 18:48:55.734832668 +0000 UTC m=+0.139074005 container start 7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cori, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 18:48:55 compute-0 podman[129055]: 2026-01-20 18:48:55.738237802 +0000 UTC m=+0.142479159 container attach 7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:48:55 compute-0 python3.9[129127]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:55 compute-0 sudo[129124]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:48:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:56 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d2c003c70 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:56 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:56 compute-0 sudo[129347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mivuitirossznjrifibzlcziqhgcooxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934936.1368747-735-97408381656713/AnsiballZ_stat.py'
Jan 20 18:48:56 compute-0 sudo[129347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:56 compute-0 lvm[129349]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:48:56 compute-0 lvm[129349]: VG ceph_vg0 finished
Jan 20 18:48:56 compute-0 festive_cori[129122]: {}
Jan 20 18:48:56 compute-0 systemd[1]: libpod-7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e.scope: Deactivated successfully.
Jan 20 18:48:56 compute-0 systemd[1]: libpod-7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e.scope: Consumed 1.217s CPU time.
Jan 20 18:48:56 compute-0 podman[129055]: 2026-01-20 18:48:56.522587591 +0000 UTC m=+0.926828928 container died 7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cori, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 18:48:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:56.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1a3e710f1870fd94654c401f5b098a2744799def538d07d3d72c95bdc592ca5-merged.mount: Deactivated successfully.
Jan 20 18:48:56 compute-0 podman[129055]: 2026-01-20 18:48:56.569165871 +0000 UTC m=+0.973407198 container remove 7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_cori, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 18:48:56 compute-0 systemd[1]: libpod-conmon-7eee17b615f242bbd9ac5b25bd042a47c869f2d135f282a841d2f7a95aecc98e.scope: Deactivated successfully.
Jan 20 18:48:56 compute-0 sudo[128799]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:48:56 compute-0 python3.9[129352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:48:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:56 compute-0 sudo[129347]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:56 compute-0 sudo[129371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:48:56 compute-0 sudo[129371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:48:56 compute-0 sudo[129371]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:56 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:56 compute-0 sudo[129469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afvlthfocngzwpthhamalkzzyjrkhafu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934936.1368747-735-97408381656713/AnsiballZ_file.py'
Jan 20 18:48:56 compute-0 sudo[129469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:48:56.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:48:57 compute-0 python3.9[129471]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.61au3n4r recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:57 compute-0 sudo[129469]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:57 compute-0 ceph-mon[74381]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:57 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:57 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:48:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:48:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:57.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:57 compute-0 sudo[129623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crtasivukgdzrqsfezorvogrdqfudkba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934937.3970506-771-276270493172116/AnsiballZ_stat.py'
Jan 20 18:48:57 compute-0 sudo[129623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:57 compute-0 python3.9[129625]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:48:58 compute-0 sudo[129623]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:48:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:58 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:58 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:58 compute-0 sudo[129701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acdleupzxpxelgcuqfavtygsnbqiklha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934937.3970506-771-276270493172116/AnsiballZ_file.py'
Jan 20 18:48:58 compute-0 sudo[129701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.280047) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934938280169, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1493, "num_deletes": 251, "total_data_size": 2835362, "memory_usage": 2879816, "flush_reason": "Manual Compaction"}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934938294649, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 1725609, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11224, "largest_seqno": 12715, "table_properties": {"data_size": 1720295, "index_size": 2582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13235, "raw_average_key_size": 20, "raw_value_size": 1708797, "raw_average_value_size": 2600, "num_data_blocks": 115, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934797, "oldest_key_time": 1768934797, "file_creation_time": 1768934938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 14651 microseconds, and 6868 cpu microseconds.
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.294707) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 1725609 bytes OK
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.294735) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.296226) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.296249) EVENT_LOG_v1 {"time_micros": 1768934938296243, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.296278) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 2829040, prev total WAL file size 2845432, number of live WAL files 2.
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.297359) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(1685KB)], [26(14MB)]
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934938297428, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16812707, "oldest_snapshot_seqno": -1}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4498 keys, 14608830 bytes, temperature: kUnknown
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934938422071, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14608830, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14574676, "index_size": 21820, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 114246, "raw_average_key_size": 25, "raw_value_size": 14488552, "raw_average_value_size": 3221, "num_data_blocks": 930, "num_entries": 4498, "num_filter_entries": 4498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768934938, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.422327) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14608830 bytes
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.450181) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 134.8 rd, 117.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 14.4 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(18.2) write-amplify(8.5) OK, records in: 4955, records dropped: 457 output_compression: NoCompression
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.450217) EVENT_LOG_v1 {"time_micros": 1768934938450204, "job": 10, "event": "compaction_finished", "compaction_time_micros": 124750, "compaction_time_cpu_micros": 32788, "output_level": 6, "num_output_files": 1, "total_output_size": 14608830, "num_input_records": 4955, "num_output_records": 4498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934938450616, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768934938452696, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.297241) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.453301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.453313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.453317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.453322) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:48:58 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:48:58.453326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:48:58 compute-0 python3.9[129703]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:48:58 compute-0 sudo[129701]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:48:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:48:58.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:48:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:48:58 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:48:59 compute-0 sudo[129853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhxrjabfsxrjkmfapwkwloguajfapgpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934938.7160459-810-213934357602709/AnsiballZ_command.py'
Jan 20 18:48:59 compute-0 sudo[129853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:48:59 compute-0 python3.9[129855]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:48:59 compute-0 ceph-mon[74381]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:48:59 compute-0 sudo[129853]: pam_unix(sudo:session): session closed for user root
Jan 20 18:48:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:48:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:48:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:48:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:48:59.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:48:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:48:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:48:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:49:00 compute-0 sudo[130008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjlglmqghcayykgiejcdshhusxsltvvy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768934939.762566-834-105513214914448/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 18:49:00 compute-0 sudo[130008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:00 compute-0 python3[130010]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 18:49:00 compute-0 sudo[130008]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:00 compute-0 ceph-mon[74381]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:00.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44002110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:01 compute-0 sudo[130161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lykvikczldvjujhgklljvzkmblwzwawh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934940.7312894-858-20763657644785/AnsiballZ_stat.py'
Jan 20 18:49:01 compute-0 sudo[130161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:01 compute-0 python3.9[130163]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:49:01 compute-0 sudo[130161]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:49:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:01.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:01 compute-0 sudo[130239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqaiykzmhqtwzxrlhhqspvtjivqakenl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934940.7312894-858-20763657644785/AnsiballZ_file.py'
Jan 20 18:49:01 compute-0 sudo[130239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:01 compute-0 python3.9[130242]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:01 compute-0 sudo[130239]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:02 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:02 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:02 compute-0 sudo[130393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-coiatarxqvxxhqmcwyladluvwsmrxklb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934942.1402562-894-50021741701275/AnsiballZ_stat.py'
Jan 20 18:49:02 compute-0 sudo[130393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:02 compute-0 ceph-mon[74381]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:49:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:02.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:02 compute-0 python3.9[130395]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:49:02 compute-0 sudo[130393]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:02 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:03 compute-0 sudo[130518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tigvugsxxodsbltifionmowyzccrmkqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934942.1402562-894-50021741701275/AnsiballZ_copy.py'
Jan 20 18:49:03 compute-0 sudo[130518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:03 compute-0 python3.9[130520]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768934942.1402562-894-50021741701275/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:03 compute-0 sudo[130518]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:03.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:03 compute-0 sudo[130672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcgurtylawnfjwnxbiwupgibhwakopai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934943.6717608-939-92281267952991/AnsiballZ_stat.py'
Jan 20 18:49:03 compute-0 sudo[130672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:04 compute-0 python3.9[130674]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:49:04 compute-0 sudo[130672]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:04 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44002110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:04 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50001eb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:04 compute-0 sudo[130750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmholqsiriujcknaulugjjmknwukvuyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934943.6717608-939-92281267952991/AnsiballZ_file.py'
Jan 20 18:49:04 compute-0 sudo[130750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:04.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:04 compute-0 python3.9[130752]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:04 compute-0 sudo[130750]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:04 compute-0 ceph-mon[74381]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:04 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:05 compute-0 sudo[130902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rshseqvsduruqrjsamuvzstvnvqkwxnp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934945.0184696-975-71285135672976/AnsiballZ_stat.py'
Jan 20 18:49:05 compute-0 sudo[130902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:05.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:05 compute-0 python3.9[130904]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:49:05 compute-0 sudo[130902]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:05 compute-0 sudo[130982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcytvpbpdqsgwfdruriwzyiwjmbegrmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934945.0184696-975-71285135672976/AnsiballZ_file.py'
Jan 20 18:49:05 compute-0 sudo[130982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:05 compute-0 python3.9[130984]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:05 compute-0 sudo[130982]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44002110 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:06.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:06 compute-0 ceph-mon[74381]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:06 compute-0 sudo[131134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyhjnyfgmkyysvzyfrvxqbxybppszcsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934946.416246-1011-15770539592899/AnsiballZ_stat.py'
Jan 20 18:49:06 compute-0 sudo[131134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:06.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:49:06 compute-0 python3.9[131136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:49:07 compute-0 sudo[131134]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:07 compute-0 sudo[131212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzuvwxvhekxtfwiiqbzqvsnlcxbtmehz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934946.416246-1011-15770539592899/AnsiballZ_file.py'
Jan 20 18:49:07 compute-0 sudo[131212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:49:07 compute-0 python3.9[131214]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:07 compute-0 sudo[131212]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:07.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:08 compute-0 sudo[131366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzcrolyroiilbgmzymrdokbhthkefbwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934947.8822951-1050-132680328419349/AnsiballZ_command.py'
Jan 20 18:49:08 compute-0 sudo[131366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:08 compute-0 python3.9[131368]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:49:08 compute-0 sudo[131366]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:08.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d440030b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:09 compute-0 sudo[131521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbyxdpyoiwejvqpaozfcnitzjvzuujhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934948.7289743-1074-230019800307761/AnsiballZ_blockinfile.py'
Jan 20 18:49:09 compute-0 sudo[131521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184909 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:49:09 compute-0 python3.9[131523]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:09 compute-0 sudo[131521]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:09 compute-0 ceph-mon[74381]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:49:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:49:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:09.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:49:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:49:09 compute-0 sudo[131675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwlnujbrhvezbshlmmxhvcavydbzczhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934949.707943-1101-268974229360225/AnsiballZ_file.py'
Jan 20 18:49:09 compute-0 sudo[131675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:10 compute-0 python3.9[131677]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:10 compute-0 sudo[131675]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:10 compute-0 ceph-mon[74381]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:49:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:49:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:10.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:10 compute-0 sudo[131827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvymcepktdbkghbfxxsvxxhrjiqcrkxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934950.2930982-1101-186964686100711/AnsiballZ_file.py'
Jan 20 18:49:10 compute-0 sudo[131827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:10 compute-0 sshd-session[71266]: Received disconnect from 38.102.83.73 port 57820:11: disconnected by user
Jan 20 18:49:10 compute-0 sshd-session[71266]: Disconnected from user zuul 38.102.83.73 port 57820
Jan 20 18:49:10 compute-0 sshd-session[71263]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:49:10 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 18:49:10 compute-0 systemd[1]: session-18.scope: Consumed 1min 39.894s CPU time.
Jan 20 18:49:10 compute-0 systemd-logind[796]: Session 18 logged out. Waiting for processes to exit.
Jan 20 18:49:10 compute-0 systemd-logind[796]: Removed session 18.
Jan 20 18:49:10 compute-0 python3.9[131829]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:10 compute-0 sudo[131827]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:10 compute-0 sudo[131853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:49:10 compute-0 sudo[131853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:10 compute-0 sudo[131853]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:11.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:11 compute-0 sudo[132006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glxuqclssqytdvugjhaqpxwzoeqdmswa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934951.3190627-1146-248410465513180/AnsiballZ_mount.py'
Jan 20 18:49:11 compute-0 sudo[132006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:11 compute-0 python3.9[132008]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 18:49:11 compute-0 sudo[132006]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d440030b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d440030b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:12 compute-0 sudo[132159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmmtlrldistdwupuwovrwajpnseofjte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934952.0170386-1146-256471555417452/AnsiballZ_mount.py'
Jan 20 18:49:12 compute-0 sudo[132159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:12 compute-0 ceph-mon[74381]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:12 compute-0 python3.9[132161]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 18:49:12 compute-0 sudo[132159]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:12.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:13 compute-0 sshd-session[124190]: Connection closed by 192.168.122.30 port 33022
Jan 20 18:49:13 compute-0 sshd-session[124187]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:49:13 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 18:49:13 compute-0 systemd[1]: session-45.scope: Consumed 29.443s CPU time.
Jan 20 18:49:13 compute-0 systemd-logind[796]: Session 45 logged out. Waiting for processes to exit.
Jan 20 18:49:13 compute-0 systemd-logind[796]: Removed session 45.
Jan 20 18:49:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:49:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:13.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:14.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:14 compute-0 ceph-mon[74381]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:49:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d440041b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:49:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:15.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:16.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:16 compute-0 ceph-mon[74381]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:49:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:16.980Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:49:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:49:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:17.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d440041b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:18 compute-0 sshd-session[132192]: Accepted publickey for zuul from 192.168.122.30 port 47506 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:49:18 compute-0 systemd-logind[796]: New session 46 of user zuul.
Jan 20 18:49:18 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 20 18:49:18 compute-0 sshd-session[132192]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:49:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:49:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:18.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:18 compute-0 sudo[132345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojuvgonuyxbnlfdjdxtquldjcrmhqjfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934958.429838-18-24877138584497/AnsiballZ_tempfile.py'
Jan 20 18:49:18 compute-0 sudo[132345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:18 compute-0 ceph-mon[74381]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:49:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:19 compute-0 python3.9[132347]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 18:49:19 compute-0 sudo[132345]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:49:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:19.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:19 compute-0 sudo[132499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsdlrqmnscnewpswgylmqvmasjbnwlgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934959.3551068-54-198035874209726/AnsiballZ_stat.py'
Jan 20 18:49:19 compute-0 sudo[132499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:49:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:49:19 compute-0 python3.9[132501]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:49:20 compute-0 sudo[132499]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003f10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d440041b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:20.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:20 compute-0 sudo[132653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjcvarkjvywtayhjuequvllgielucorl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934960.3065822-78-9926077221823/AnsiballZ_slurp.py'
Jan 20 18:49:20 compute-0 sudo[132653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:21 compute-0 python3.9[132655]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 20 18:49:21 compute-0 sudo[132653]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:49:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:21 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:49:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:21 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:49:21 compute-0 ceph-mon[74381]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:49:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:21.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:21 compute-0 sudo[132806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoeczpjsqgpfnhcfpzcgivfmoyutwcdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934961.2492516-102-119915416225288/AnsiballZ_stat.py'
Jan 20 18:49:21 compute-0 sudo[132806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:21 compute-0 python3.9[132809]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.b_c4jjcy follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:49:21 compute-0 sudo[132806]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:21 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:49:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:22 compute-0 sudo[132932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkggcowmhovwjdydsrpvkpzvrcylredk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934961.2492516-102-119915416225288/AnsiballZ_copy.py'
Jan 20 18:49:22 compute-0 sudo[132932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:22 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 18:49:22 compute-0 ceph-mon[74381]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:49:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:22.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:22 compute-0 python3.9[132934]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.b_c4jjcy mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934961.2492516-102-119915416225288/.source.b_c4jjcy _original_basename=.xk4sq178 follow=False checksum=9c1612b359ff32ec267e41c968e1dbce82f1fb66 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:22 compute-0 sudo[132932]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44004ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:23 compute-0 sudo[133086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfytmatjvhidrwdfrmpboedvtkvihhyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934962.8633628-147-18504353355565/AnsiballZ_setup.py'
Jan 20 18:49:23 compute-0 sudo[133086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:49:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:23.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:23 compute-0 python3.9[133088]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:49:23 compute-0 sudo[133086]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d44004ad0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:24 compute-0 sudo[133242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxmbaqifgzvwmpcreyjvppezwsiwauhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934964.1682315-172-3092129854289/AnsiballZ_blockinfile.py'
Jan 20 18:49:24 compute-0 sudo[133242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:24.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:24 compute-0 ceph-mon[74381]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:49:24 compute-0 python3.9[133244]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC9QdI3DVbQ/8fNqEX5lzpJhhopd9VDFCOgX5Ovz96zAHRQi14Jvy8BufX9CL0nn2KOl6ezG8aIoN/hRsxCDqP65NYENNiByELn9tcS4HvcuoagePtrXopXNN2+zL3+f9mUnz3qU9ygYcjKbL+Q6PS39awjEYMDz6GSF0CWlJiQ2EVuSkpGUxLwePZI1OoVvjp7enzUJiXOT1dy1t4dsk+oAzlOCz7Twc5cYTKMsIESt6jBb3yW6gs3FUO0b6XN9xuE7bWoaTFrPzdUTXZV+kOH9/bDLe6am45px3PMBlOBK3/Dj7RrO2YLNqU7O+xjM5OKRsGZCKGjWVIB/xCRXUHaUhy9Ysa7lcTd8CvaOuVaE8WC9M5E75GQUXEsWnu7zs0+W5ZNQGQ+Y9LLfw6kNdIwLmvzVYXv3+eLyUnU9I1hOw6pgVpkfBB7NlkA/KumUL/XhjuamC0ZHRtAY7BEF6tMG/GRfy+spzvJ8gkHNtXNsF5uneM29eZfNeHasXpmras=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKZNrFAD4ziqo0rY8uHXS8b5yDewrwNrfH4oVhpwZEyi
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJHO4xAirgvBrLVTYbLwLCRykAt37Lt68eJO/YoBRvtoa8G0TJwvYUVlCWW1uOltqGDLrd0Z3J9FcQSAsez16Lw=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCw7cXQxn2VuIaK8uuKhbAQcJI6FiqRrHdVUvwaDkOrq8qzvByaVgk5xl8EmCSduf3k7Y8SialoKXuoU8N4uSFOSpfxwz+Hh3X3fqr6lhtpSdW+l8C1kh3dPgL3wL3CnE7vIXa+JC+4RvVawPsqUZ4Mr9cCO1BQ+K1Jl9P2NFNV2nHdMeXlm8Y5lti9nJg2TH2c+qoVr2JJ0mbQ2g6802EjO2cn2ICs7VGaGTwXoCYX4HbPgf+zq5fv6uF8vZ+fz+tpoj7+ORrrNVoMDMQPDz+OT1l9WmK4vQ0x2R+27rDgRDmcetscRnRCtJRUPUEkHy72oDBZDWQvM2R3c+hZbjeJJRpiLRcri/fCFrweLudytyA1hKV+sJodq8EbfrC8lMy0fxEGDs3/YXN0udAzS8Sg/6LiIiJRzNcbF6H70B8P4FpAnKo03BrWwGDGRVNXWh8YqOXzIqN/FQVPOJ2aZ3ZCt1xIlMKY5ncsFz1F4volxuwrKutRdeDhfJu0M+M5Go8=
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMtfgxr5p3WX7/JV8ZGeyedNjypTLSFpEQC1rgg6zjYI
                                             compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFqHcFmVLlh75o0B9MfID1Caa9btI9E7S53rpl9+oGjdKlHBWb0Ut4EGvboMZg6zbxshPKgaBs01y0VgbZ/88Io=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgECz5eSdEf6b1nBbTrZn/96903LYDK9+b7cxGGOCNIDGWsHg2JX5Kqd/u4iGx/w1fQTGAR14aejfZ2SeHn8xBZRL0RP7QYzl5W/W0lgMt+4fg8fWSgBK+lGuWeXHxLQA3EfGSOaH75DPbFbPjNhPZwK8MM3/bOS5enrM/lUVLI0VjjLnetWBuhc6gJxekhxkMC+KxBr+1a6yk+lD0cSkbmAmVqpFWaQIJNPncphxpsr3JkTklrC7sP7JtXOsYCFIiHJw/tPUTIfMpYDk5suT3f2b+uuRFUWI3DJOwpLaBMpN39KNvfSFAJCNn5V1ts3cw4gwm4TCggGyo5cCQy1wFPvrxqtNQ2SXE1N3DHUV6eF/aB7ho3f9Tfd3e04AbyJCY9eCHMOks+s0XErfE/Cn0chsJ4ZM+ET3NfOQK+Pb0/TVX82iYfJLZYF9Jp11RvI77SMs/7osnwh70VyTLREUMDpiXboJEynLArKP6ijyVsgJQlb28WyBbvYZSG1ObDC8=
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJTqyKhPPFjqXut+RZeKNFFnMHaz29oIDVm2c0ADBh2O
                                             compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK48ShS5Scyh986e4WPn7bEgHcWXxRaxxF6rW4jUSClnY+cE5Aoo/m90YSyz93HHWjTtRg6XJ3YwjdVSOx6pfdw=
                                              create=True mode=0644 path=/tmp/ansible.b_c4jjcy state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:24 compute-0 sudo[133242]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:49:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:49:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:49:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:25.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:25 compute-0 sudo[133395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eknsotbkpboctuypqnsewclmjkbvqulq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934965.0839272-196-181617384266200/AnsiballZ_command.py'
Jan 20 18:49:25 compute-0 sudo[133395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:49:25 compute-0 python3.9[133397]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.b_c4jjcy' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:49:25 compute-0 sudo[133395]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:26 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:26 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:26 compute-0 sudo[133550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yllihdrwyzkfnfvexzcznygvqwdcjnbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934966.0613232-220-190195968395139/AnsiballZ_file.py'
Jan 20 18:49:26 compute-0 sudo[133550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:26 compute-0 python3.9[133552]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.b_c4jjcy state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:26 compute-0 sudo[133550]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:26 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003fa0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:49:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:26.982Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:49:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:26.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:49:27 compute-0 sshd-session[132195]: Connection closed by 192.168.122.30 port 47506
Jan 20 18:49:27 compute-0 sshd-session[132192]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:49:27 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 18:49:27 compute-0 systemd[1]: session-46.scope: Consumed 5.101s CPU time.
Jan 20 18:49:27 compute-0 systemd-logind[796]: Session 46 logged out. Waiting for processes to exit.
Jan 20 18:49:27 compute-0 systemd-logind[796]: Removed session 46.
Jan 20 18:49:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:49:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:27.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:27 compute-0 ceph-mon[74381]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:49:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:28 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:28 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40000df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:28.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:28 compute-0 ceph-mon[74381]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:49:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:28 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:49:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:29.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:29] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:49:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:29] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:49:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:30 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:30 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:30.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:30 compute-0 ceph-mon[74381]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:49:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:30 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:31 compute-0 sudo[133582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:49:31 compute-0 sudo[133582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:31 compute-0 sudo[133582]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 18:49:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/184931 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:49:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:49:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:31.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:32 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:32 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:32 compute-0 sshd-session[133609]: Accepted publickey for zuul from 192.168.122.30 port 40068 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:49:32 compute-0 systemd-logind[796]: New session 47 of user zuul.
Jan 20 18:49:32 compute-0 systemd[1]: Started Session 47 of User zuul.
Jan 20 18:49:32 compute-0 sshd-session[133609]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:49:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:32.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:32 compute-0 ceph-mon[74381]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:49:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:32 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:33 compute-0 python3.9[133762]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:49:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:49:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:33.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:34 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:34 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:34.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:34 compute-0 sudo[133918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykpcogwhtsijgnmlwvyifrndbszyhcwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934974.0461824-51-164944911720681/AnsiballZ_systemd.py'
Jan 20 18:49:34 compute-0 sudo[133918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:34 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:34 compute-0 python3.9[133920]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 18:49:35 compute-0 sudo[133918]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:35 compute-0 ceph-mon[74381]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:49:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:49:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:35.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:36 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50003fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:36 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:36.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:36 compute-0 sudo[134074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjfbepfznxrfleluzingusmxxmvukrwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934976.3225162-75-216511541085204/AnsiballZ_systemd.py'
Jan 20 18:49:36 compute-0 sudo[134074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:36 compute-0 python3.9[134076]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:49:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:36 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:36 compute-0 sudo[134074]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:36.984Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:49:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:36.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:49:37 compute-0 ceph-mon[74381]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:49:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:49:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:37.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:37 compute-0 sudo[134229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urojpdvuqanymplqwnniiryzkhwavkqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934977.3275197-102-24232887259202/AnsiballZ_command.py'
Jan 20 18:49:37 compute-0 sudo[134229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:37 compute-0 python3.9[134231]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:49:37 compute-0 sudo[134229]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:38 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:38 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d400025c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:38 compute-0 ceph-mon[74381]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:49:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:38.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:38 compute-0 sudo[134383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnzmmiqhuoththmojpkbpozroyesjbxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934978.2292953-126-108261410097251/AnsiballZ_stat.py'
Jan 20 18:49:38 compute-0 sudo[134383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:38 compute-0 python3.9[134385]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:49:38 compute-0 sudo[134383]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:38 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:49:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:39.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:39 compute-0 sudo[134537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uivrojxakbjfkdpdnkyuvsssnpagzpux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934979.20927-153-250886218943210/AnsiballZ_file.py'
Jan 20 18:49:39 compute-0 sudo[134537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:39] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:49:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:39] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:49:39 compute-0 python3.9[134539]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:49:39 compute-0 sudo[134537]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:40 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:40 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:40 compute-0 sshd-session[133612]: Connection closed by 192.168.122.30 port 40068
Jan 20 18:49:40 compute-0 sshd-session[133609]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:49:40 compute-0 systemd[1]: session-47.scope: Deactivated successfully.
Jan 20 18:49:40 compute-0 systemd[1]: session-47.scope: Consumed 3.810s CPU time.
Jan 20 18:49:40 compute-0 systemd-logind[796]: Session 47 logged out. Waiting for processes to exit.
Jan 20 18:49:40 compute-0 systemd-logind[796]: Removed session 47.
Jan 20 18:49:40 compute-0 ceph-mon[74381]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:49:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:49:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:40.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:40 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:49:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:41.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:42 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:42 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:42.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:42 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:42 compute-0 ceph-mon[74381]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:49:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:49:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:43.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:49:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:44 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40002ee0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:44 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:44.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:44 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:45 compute-0 ceph-mon[74381]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:49:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:45.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:49:45 compute-0 sshd-session[134568]: Accepted publickey for zuul from 192.168.122.30 port 41492 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:49:45 compute-0 systemd-logind[796]: New session 48 of user zuul.
Jan 20 18:49:45 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 20 18:49:45 compute-0 sshd-session[134568]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:49:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:46 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:46 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:46 compute-0 ceph-mon[74381]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:49:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:46.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:49:46 compute-0 python3.9[134723]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:49:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:46 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:46.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:49:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:46.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:49:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:49:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:47.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:47 compute-0 sudo[134879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeaedycxvqctwqahlowqwhquutkxaujk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934987.2843816-57-79910571755809/AnsiballZ_setup.py'
Jan 20 18:49:47 compute-0 sudo[134879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:47 compute-0 python3.9[134881]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:49:48 compute-0 sudo[134879]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:48 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:48 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:48 compute-0 ceph-mon[74381]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:49:48 compute-0 sudo[134963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygygopgfqdcefszefrvyqreviofcnipf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768934987.2843816-57-79910571755809/AnsiballZ_dnf.py'
Jan 20 18:49:48 compute-0 sudo[134963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:49:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:48.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:48 compute-0 python3.9[134965]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 18:49:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:48 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:49:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:49.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:49:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:49] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:49:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:49] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:49:50 compute-0 sudo[134963]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:50 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:50 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:50.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:50 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:51 compute-0 sudo[135045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:49:51 compute-0 sudo[135045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:51 compute-0 sudo[135045]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:49:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:51.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:51 compute-0 ceph-mon[74381]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:51 compute-0 python3.9[135143]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:49:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:52 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:52 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:49:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:52.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:49:52 compute-0 ceph-mon[74381]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:49:52 compute-0 python3.9[135296]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 18:49:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:52 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:53.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:53 compute-0 python3.9[135447]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:49:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d40003bf0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:54 compute-0 python3.9[135598]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:49:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:54.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:54 compute-0 ceph-mon[74381]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:49:54
Jan 20 18:49:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:49:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:49:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.control', '.nfs', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data']
Jan 20 18:49:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:49:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:54 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d500040a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:49:55 compute-0 sshd-session[134573]: Connection closed by 192.168.122.30 port 41492
Jan 20 18:49:55 compute-0 sshd-session[134568]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:49:55 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 18:49:55 compute-0 systemd[1]: session-48.scope: Consumed 5.888s CPU time.
Jan 20 18:49:55 compute-0 systemd-logind[796]: Session 48 logged out. Waiting for processes to exit.
Jan 20 18:49:55 compute-0 systemd-logind[796]: Removed session 48.
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:49:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:55.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:49:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:56 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:56 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:56.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:56 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24000f30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:56 compute-0 ceph-mon[74381]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:49:56.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:49:57 compute-0 sudo[135627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:49:57 compute-0 sudo[135627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:57 compute-0 sudo[135627]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:57 compute-0 sudo[135652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 20 18:49:57 compute-0 sudo[135652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:57 compute-0 sudo[135652]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:49:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:49:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:57.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:49:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:49:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:49:57 compute-0 sudo[135700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:49:57 compute-0 sudo[135700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:57 compute-0 sudo[135700]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:57 compute-0 sudo[135725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:49:57 compute-0 sudo[135725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:58 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:49:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:58 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:58 compute-0 sudo[135725]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:49:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:49:58.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:58 compute-0 ceph-mon[74381]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:49:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:49:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:49:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:49:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:49:58 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:49:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:49:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:49:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:49:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:49:59.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:49:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:59 compute-0 sudo[135784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:49:59 compute-0 sudo[135784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:59 compute-0 sudo[135784]: pam_unix(sudo:session): session closed for user root
Jan 20 18:49:59 compute-0 sudo[135809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:49:59 compute-0 sudo[135809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:49:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:59] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:49:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:49:59] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:49:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:59 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:49:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:49:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:49:59 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:50:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.212575767 +0000 UTC m=+0.041841939 container create 4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:50:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:00 compute-0 systemd[1]: Started libpod-conmon-4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5.scope.
Jan 20 18:50:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.195416282 +0000 UTC m=+0.024682474 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.306248689 +0000 UTC m=+0.135514901 container init 4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True)
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.319965855 +0000 UTC m=+0.149232037 container start 4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.323460825 +0000 UTC m=+0.152727017 container attach 4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:50:00 compute-0 lucid_bouman[135892]: 167 167
Jan 20 18:50:00 compute-0 systemd[1]: libpod-4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5.scope: Deactivated successfully.
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.328909533 +0000 UTC m=+0.158175725 container died 4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:50:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-3016051976b784c396920399bd45a6a37cc7d44fb168a25d8361ace8bc2332ec-merged.mount: Deactivated successfully.
Jan 20 18:50:00 compute-0 podman[135875]: 2026-01-20 18:50:00.377833484 +0000 UTC m=+0.207099666 container remove 4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bouman, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:50:00 compute-0 systemd[1]: libpod-conmon-4b17ee20e9f7e493e26ed2bb338559579d75a297301befd54adc644ea266f3d5.scope: Deactivated successfully.
Jan 20 18:50:00 compute-0 sshd-session[135910]: Accepted publickey for zuul from 192.168.122.30 port 46892 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:50:00 compute-0 systemd-logind[796]: New session 49 of user zuul.
Jan 20 18:50:00 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 20 18:50:00 compute-0 podman[135917]: 2026-01-20 18:50:00.557044775 +0000 UTC m=+0.065580834 container create 88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:50:00 compute-0 sshd-session[135910]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:50:00 compute-0 systemd[1]: Started libpod-conmon-88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047.scope.
Jan 20 18:50:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5a9d74a40742759fd0b754e664fcc46eba23afeda9aa3229dd48b412f8f37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5a9d74a40742759fd0b754e664fcc46eba23afeda9aa3229dd48b412f8f37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5a9d74a40742759fd0b754e664fcc46eba23afeda9aa3229dd48b412f8f37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5a9d74a40742759fd0b754e664fcc46eba23afeda9aa3229dd48b412f8f37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:00 compute-0 podman[135917]: 2026-01-20 18:50:00.528905363 +0000 UTC m=+0.037441512 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/10e5a9d74a40742759fd0b754e664fcc46eba23afeda9aa3229dd48b412f8f37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:00 compute-0 podman[135917]: 2026-01-20 18:50:00.635364124 +0000 UTC m=+0.143900273 container init 88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 18:50:00 compute-0 podman[135917]: 2026-01-20 18:50:00.642467069 +0000 UTC m=+0.151003158 container start 88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_babbage, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:50:00 compute-0 podman[135917]: 2026-01-20 18:50:00.646719682 +0000 UTC m=+0.155255731 container attach 88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_babbage, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:50:00 compute-0 ceph-mon[74381]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:00 compute-0 ceph-mon[74381]: overall HEALTH_OK
Jan 20 18:50:00 compute-0 great_babbage[135936]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:50:00 compute-0 great_babbage[135936]: --> All data devices are unavailable
Jan 20 18:50:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:00 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:00 compute-0 systemd[1]: libpod-88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047.scope: Deactivated successfully.
Jan 20 18:50:00 compute-0 podman[135917]: 2026-01-20 18:50:00.98567289 +0000 UTC m=+0.494208969 container died 88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 18:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-10e5a9d74a40742759fd0b754e664fcc46eba23afeda9aa3229dd48b412f8f37-merged.mount: Deactivated successfully.
Jan 20 18:50:01 compute-0 podman[135917]: 2026-01-20 18:50:01.024217183 +0000 UTC m=+0.532753252 container remove 88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_babbage, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:50:01 compute-0 systemd[1]: libpod-conmon-88fa8f4c91e35d4f3cfd0b9dbc25b45d653f1c947c370521f3b458288de2f047.scope: Deactivated successfully.
Jan 20 18:50:01 compute-0 sudo[135809]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:01 compute-0 sudo[136042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:50:01 compute-0 sudo[136042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:01 compute-0 sudo[136042]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:01 compute-0 sudo[136087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:50:01 compute-0 sudo[136087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:50:01 compute-0 python3.9[136161]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.559172966 +0000 UTC m=+0.041928591 container create 92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:50:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:01.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:01 compute-0 systemd[1]: Started libpod-conmon-92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d.scope.
Jan 20 18:50:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.541015052 +0000 UTC m=+0.023770697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.648635707 +0000 UTC m=+0.131391362 container init 92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wu, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.654458255 +0000 UTC m=+0.137213880 container start 92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.658015908 +0000 UTC m=+0.140771533 container attach 92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:50:01 compute-0 hungry_wu[136225]: 167 167
Jan 20 18:50:01 compute-0 systemd[1]: libpod-92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d.scope: Deactivated successfully.
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.659616554 +0000 UTC m=+0.142372199 container died 92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:50:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7182b59269559c2fd212134367817bcc30de6fd1c8cb7e348cc0509ebc280cea-merged.mount: Deactivated successfully.
Jan 20 18:50:01 compute-0 podman[136202]: 2026-01-20 18:50:01.701203323 +0000 UTC m=+0.183958938 container remove 92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wu, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 20 18:50:01 compute-0 systemd[1]: libpod-conmon-92b571fa704a0e9b605f8736cb9485f35870fc07c7e4fce8db99283f14bc4f0d.scope: Deactivated successfully.
Jan 20 18:50:01 compute-0 podman[136250]: 2026-01-20 18:50:01.870359574 +0000 UTC m=+0.061471775 container create c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_volhard, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:50:01 compute-0 systemd[1]: Started libpod-conmon-c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95.scope.
Jan 20 18:50:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:50:01 compute-0 podman[136250]: 2026-01-20 18:50:01.850363087 +0000 UTC m=+0.041475288 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf93b639bb57cd66baf6ab1fa64a20056b82e95bfc05b34924cc72979443277c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf93b639bb57cd66baf6ab1fa64a20056b82e95bfc05b34924cc72979443277c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf93b639bb57cd66baf6ab1fa64a20056b82e95bfc05b34924cc72979443277c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf93b639bb57cd66baf6ab1fa64a20056b82e95bfc05b34924cc72979443277c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:01 compute-0 podman[136250]: 2026-01-20 18:50:01.962726228 +0000 UTC m=+0.153838439 container init c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_volhard, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:50:01 compute-0 podman[136250]: 2026-01-20 18:50:01.96935498 +0000 UTC m=+0.160467171 container start c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_volhard, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 18:50:01 compute-0 podman[136250]: 2026-01-20 18:50:01.972343046 +0000 UTC m=+0.163455237 container attach c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:50:02 compute-0 recursing_volhard[136290]: {
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:     "0": [
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:         {
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "devices": [
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "/dev/loop3"
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             ],
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "lv_name": "ceph_lv0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "lv_size": "21470642176",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "name": "ceph_lv0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "tags": {
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.cluster_name": "ceph",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.crush_device_class": "",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.encrypted": "0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.osd_id": "0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.type": "block",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.vdo": "0",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:                 "ceph.with_tpm": "0"
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             },
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "type": "block",
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:             "vg_name": "ceph_vg0"
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:         }
Jan 20 18:50:02 compute-0 recursing_volhard[136290]:     ]
Jan 20 18:50:02 compute-0 recursing_volhard[136290]: }
Jan 20 18:50:02 compute-0 systemd[1]: libpod-c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95.scope: Deactivated successfully.
Jan 20 18:50:02 compute-0 podman[136250]: 2026-01-20 18:50:02.234608172 +0000 UTC m=+0.425720363 container died c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_volhard, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:50:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:02 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300041a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf93b639bb57cd66baf6ab1fa64a20056b82e95bfc05b34924cc72979443277c-merged.mount: Deactivated successfully.
Jan 20 18:50:02 compute-0 podman[136250]: 2026-01-20 18:50:02.27542812 +0000 UTC m=+0.466540321 container remove c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_volhard, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:50:02 compute-0 systemd[1]: libpod-conmon-c824c03ff444db79fa605b41d015bb6f85b105d4844c62b50d05b8971be70a95.scope: Deactivated successfully.
Jan 20 18:50:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:02 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:02 compute-0 sudo[136087]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:02 compute-0 sudo[136312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:50:02 compute-0 sudo[136312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:02 compute-0 sudo[136312]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:02 compute-0 sudo[136337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:50:02 compute-0 sudo[136337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:02.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:02 compute-0 podman[136456]: 2026-01-20 18:50:02.815866121 +0000 UTC m=+0.038857281 container create a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_jones, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:50:02 compute-0 systemd[1]: Started libpod-conmon-a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698.scope.
Jan 20 18:50:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:50:02 compute-0 podman[136456]: 2026-01-20 18:50:02.892682568 +0000 UTC m=+0.115673748 container init a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_jones, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:50:02 compute-0 podman[136456]: 2026-01-20 18:50:02.797866663 +0000 UTC m=+0.020857833 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:02 compute-0 podman[136456]: 2026-01-20 18:50:02.898676251 +0000 UTC m=+0.121667411 container start a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_jones, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Jan 20 18:50:02 compute-0 podman[136456]: 2026-01-20 18:50:02.902219843 +0000 UTC m=+0.125211093 container attach a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:50:02 compute-0 heuristic_jones[136472]: 167 167
Jan 20 18:50:02 compute-0 systemd[1]: libpod-a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698.scope: Deactivated successfully.
Jan 20 18:50:02 compute-0 podman[136477]: 2026-01-20 18:50:02.946327226 +0000 UTC m=+0.031768928 container died a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:50:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:02 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d2303b184cb4e7f695abdcd4bc4a5b7f6174ab1636bdbf49f0b8fe98352d4a1-merged.mount: Deactivated successfully.
Jan 20 18:50:02 compute-0 podman[136477]: 2026-01-20 18:50:02.984054134 +0000 UTC m=+0.069495856 container remove a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:50:02 compute-0 systemd[1]: libpod-conmon-a79544da4607ec3aae5eb2e10ed50e0843384c7c388756d9ef4ef578f5cd7698.scope: Deactivated successfully.
Jan 20 18:50:03 compute-0 sudo[136568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezzybmqpkkcwusqtjzqzmtypyzdkygid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935002.6918046-106-228334190418867/AnsiballZ_file.py'
Jan 20 18:50:03 compute-0 sudo[136568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:03 compute-0 ceph-mon[74381]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:50:03 compute-0 podman[136573]: 2026-01-20 18:50:03.162366058 +0000 UTC m=+0.042584519 container create 694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:50:03 compute-0 systemd[1]: Started libpod-conmon-694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f.scope.
Jan 20 18:50:03 compute-0 podman[136573]: 2026-01-20 18:50:03.142222477 +0000 UTC m=+0.022440988 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c32cec7ee6876df1611fae8138c2a984595573613f14407ee2fa7a744c747c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c32cec7ee6876df1611fae8138c2a984595573613f14407ee2fa7a744c747c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c32cec7ee6876df1611fae8138c2a984595573613f14407ee2fa7a744c747c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c32cec7ee6876df1611fae8138c2a984595573613f14407ee2fa7a744c747c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:03 compute-0 podman[136573]: 2026-01-20 18:50:03.257577445 +0000 UTC m=+0.137795916 container init 694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_spence, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:50:03 compute-0 podman[136573]: 2026-01-20 18:50:03.264516225 +0000 UTC m=+0.144734686 container start 694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_spence, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 18:50:03 compute-0 podman[136573]: 2026-01-20 18:50:03.268029576 +0000 UTC m=+0.148248067 container attach 694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_spence, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 18:50:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:03 compute-0 python3.9[136575]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:03 compute-0 sudo[136568]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:03.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:03 compute-0 sudo[136796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjmsraaktswfcpytpbrhszegrmatqgzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935003.4641912-106-190901459125943/AnsiballZ_file.py'
Jan 20 18:50:03 compute-0 sudo[136796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:03 compute-0 python3.9[136802]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:03 compute-0 lvm[136818]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:50:03 compute-0 lvm[136818]: VG ceph_vg0 finished
Jan 20 18:50:03 compute-0 sudo[136796]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:03 compute-0 pedantic_spence[136590]: {}
Jan 20 18:50:03 compute-0 systemd[1]: libpod-694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f.scope: Deactivated successfully.
Jan 20 18:50:03 compute-0 systemd[1]: libpod-694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f.scope: Consumed 1.076s CPU time.
Jan 20 18:50:03 compute-0 podman[136573]: 2026-01-20 18:50:03.973555661 +0000 UTC m=+0.853774132 container died 694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:50:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c32cec7ee6876df1611fae8138c2a984595573613f14407ee2fa7a744c747c-merged.mount: Deactivated successfully.
Jan 20 18:50:04 compute-0 podman[136573]: 2026-01-20 18:50:04.022066271 +0000 UTC m=+0.902284752 container remove 694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:50:04 compute-0 systemd[1]: libpod-conmon-694091e4fdaac1f4a83853d8a2d5564368b795da84d0f699de0a078bf40f464f.scope: Deactivated successfully.
Jan 20 18:50:04 compute-0 sudo[136337]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:50:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:50:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:50:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:04 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:04 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300041c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:50:04 compute-0 sudo[137004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijilhbvmyowxpuqmfpbfptwmdbwdrwma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935004.0928867-148-159393910436829/AnsiballZ_stat.py'
Jan 20 18:50:04 compute-0 sudo[136955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:50:04 compute-0 sudo[136955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:04 compute-0 sudo[137004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:04 compute-0 sudo[136955]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:04.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:04 compute-0 python3.9[137007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:04 compute-0 sudo[137004]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:04 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:05 compute-0 ceph-mon[74381]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:50:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:50:05 compute-0 sudo[137129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpnnxnzhxgypjagevgurbewjhvviemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935004.0928867-148-159393910436829/AnsiballZ_copy.py'
Jan 20 18:50:05 compute-0 sudo[137129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:05 compute-0 python3.9[137131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935004.0928867-148-159393910436829/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c0ef231a2f06570a8ce42687426e3aaf3db67cdc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:05 compute-0 sudo[137129]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:05.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:05 compute-0 sudo[137283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjpbmromhisfngwehuewehqkmthjvsqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935005.530954-148-20612705212202/AnsiballZ_stat.py'
Jan 20 18:50:05 compute-0 sudo[137283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:05 compute-0 python3.9[137285]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:05 compute-0 sudo[137283]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:06 compute-0 sudo[137406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjlpxouspgxbvuvszzfvukzapuvhchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935005.530954-148-20612705212202/AnsiballZ_copy.py'
Jan 20 18:50:06 compute-0 sudo[137406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:06 compute-0 ceph-mon[74381]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:06 compute-0 python3.9[137408]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935005.530954-148-20612705212202/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=02327a9aff7a73fa67bf55446c01afc4147f4a12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:06 compute-0 sudo[137406]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:06.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:06 compute-0 sudo[137558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkuyqnpabrjeoiqmgvpqxlsoiuhfirzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935006.6459892-148-208405382213950/AnsiballZ_stat.py'
Jan 20 18:50:06 compute-0 sudo[137558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:06 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300041e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:06.987Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:50:07 compute-0 python3.9[137560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:07 compute-0 sudo[137558]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:07 compute-0 sudo[137681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxhtrylvebtbidyzfsmutvdlqkaozmoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935006.6459892-148-208405382213950/AnsiballZ_copy.py'
Jan 20 18:50:07 compute-0 sudo[137681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:50:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:07.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:07 compute-0 python3.9[137683]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935006.6459892-148-208405382213950/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=70bccf4f56b534605965994a3390e7f33712f137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:07 compute-0 sudo[137681]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:08 compute-0 sudo[137835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsrtjzprnapyoxgjthbstfwkyorujufn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935007.916708-274-126284282010107/AnsiballZ_file.py'
Jan 20 18:50:08 compute-0 sudo[137835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d240010d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d500042f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:08 compute-0 python3.9[137837]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:08 compute-0 sudo[137835]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:08 compute-0 ceph-mon[74381]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:50:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:08.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:08 compute-0 sudo[137987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbzwkusuctfddjvdlfsdvcyiqzismxan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935008.5251875-274-261503386674456/AnsiballZ_file.py'
Jan 20 18:50:08 compute-0 sudo[137987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:08 compute-0 python3.9[137989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:08 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:08 compute-0 sudo[137987]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:09 compute-0 sudo[138139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yubvtkacltsmelgggjppszsblhnywads ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935009.1257758-318-275699368064942/AnsiballZ_stat.py'
Jan 20 18:50:09 compute-0 sudo[138139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:09 compute-0 python3.9[138141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:09.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:09 compute-0 sudo[138139]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:09] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:50:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:09] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Jan 20 18:50:09 compute-0 sudo[138264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skidzfnplqpdblwmpugxrbpivyhdrdor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935009.1257758-318-275699368064942/AnsiballZ_copy.py'
Jan 20 18:50:09 compute-0 sudo[138264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:10 compute-0 python3.9[138266]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935009.1257758-318-275699368064942/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6fbb9baa316f52ff6796e38e7595d81cdb5259a7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:10 compute-0 sudo[138264]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:10 compute-0 sudo[138416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txgiddhyqqeeogmukhxcfxtuhvvfqgni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935010.189292-318-1321328917630/AnsiballZ_stat.py'
Jan 20 18:50:10 compute-0 sudo[138416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:10.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:10 compute-0 python3.9[138418]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:10 compute-0 sudo[138416]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:10 compute-0 ceph-mon[74381]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:50:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:10 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:11 compute-0 sudo[138539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yevtemkvifsmmcatbgfrvwllkyonvrdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935010.189292-318-1321328917630/AnsiballZ_copy.py'
Jan 20 18:50:11 compute-0 sudo[138539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:11 compute-0 python3.9[138541]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935010.189292-318-1321328917630/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8db49ccc4e1ff3c8d3bfa8a5b097b9ca443b9391 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:11 compute-0 sudo[138539]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:11 compute-0 sudo[138542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:50:11 compute-0 sudo[138542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:11 compute-0 sudo[138542]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:50:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:11.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:11 compute-0 sudo[138718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bynfrsggmzdrfgpnjfwvgcjaixgxcuuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935011.3285851-318-202850473695148/AnsiballZ_stat.py'
Jan 20 18:50:11 compute-0 sudo[138718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:11 compute-0 python3.9[138721]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.801400) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935011801456, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 886, "num_deletes": 251, "total_data_size": 1537247, "memory_usage": 1557000, "flush_reason": "Manual Compaction"}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935011811345, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1503879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12716, "largest_seqno": 13601, "table_properties": {"data_size": 1499388, "index_size": 2143, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9482, "raw_average_key_size": 19, "raw_value_size": 1490516, "raw_average_value_size": 3005, "num_data_blocks": 95, "num_entries": 496, "num_filter_entries": 496, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934938, "oldest_key_time": 1768934938, "file_creation_time": 1768935011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 9974 microseconds, and 4254 cpu microseconds.
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.811381) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1503879 bytes OK
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.811400) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.813075) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.813088) EVENT_LOG_v1 {"time_micros": 1768935011813084, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.813107) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 1533026, prev total WAL file size 1533026, number of live WAL files 2.
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.813863) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1468KB)], [29(13MB)]
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935011813918, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 16112709, "oldest_snapshot_seqno": -1}
Jan 20 18:50:11 compute-0 sudo[138718]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4476 keys, 13793948 bytes, temperature: kUnknown
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935011886474, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 13793948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13760941, "index_size": 20737, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 114610, "raw_average_key_size": 25, "raw_value_size": 13676167, "raw_average_value_size": 3055, "num_data_blocks": 874, "num_entries": 4476, "num_filter_entries": 4476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.886739) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 13793948 bytes
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.888257) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 221.8 rd, 189.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 13.9 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(19.9) write-amplify(9.2) OK, records in: 4994, records dropped: 518 output_compression: NoCompression
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.888278) EVENT_LOG_v1 {"time_micros": 1768935011888268, "job": 12, "event": "compaction_finished", "compaction_time_micros": 72658, "compaction_time_cpu_micros": 26402, "output_level": 6, "num_output_files": 1, "total_output_size": 13793948, "num_input_records": 4994, "num_output_records": 4476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935011888788, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935011891969, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.813827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.892148) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.892154) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.892155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.892157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:50:11 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:50:11.892158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:50:12 compute-0 sudo[138842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioyjacefxtrbtlzjnczqvhxjpfmzxhhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935011.3285851-318-202850473695148/AnsiballZ_copy.py'
Jan 20 18:50:12 compute-0 sudo[138842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:12 compute-0 python3.9[138844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935011.3285851-318-202850473695148/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=32c8e46a341fa688868557503edbef92b6c938e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d30004220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:12 compute-0 sudo[138842]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:12.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:12 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:13 compute-0 sudo[138994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iguvilacnofoxuoqclognmtlnczdaidp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935012.5564926-434-216219132433705/AnsiballZ_file.py'
Jan 20 18:50:13 compute-0 sudo[138994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:13 compute-0 ceph-mon[74381]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:50:13 compute-0 python3.9[138996]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:13 compute-0 sudo[138994]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:13.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:13 compute-0 sudo[139147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvtioarvqetokhtcjdorbpgdukkaotvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935013.3342326-434-45901701029612/AnsiballZ_file.py'
Jan 20 18:50:13 compute-0 sudo[139147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:13 compute-0 python3.9[139150]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:13 compute-0 sudo[139147]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:14 compute-0 ceph-mon[74381]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:14 compute-0 sudo[139300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-attqlyprhatuspubxrfrezwbyicclzdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935014.0268984-476-78207622809904/AnsiballZ_stat.py'
Jan 20 18:50:14 compute-0 sudo[139300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:14 compute-0 python3.9[139302]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:14 compute-0 sudo[139300]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:14 compute-0 sudo[139423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bitcofvpftqmsgutfwpcfwsvuyeresvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935014.0268984-476-78207622809904/AnsiballZ_copy.py'
Jan 20 18:50:14 compute-0 sudo[139423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:14 compute-0 python3.9[139425]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935014.0268984-476-78207622809904/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=684fd8c2121aba8ba20fc61beeb3c42d27cdf447 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:14 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:14 compute-0 sudo[139423]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:15 compute-0 sudo[139575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpoqfxftuqjwwzrislvlsmdgazfoqgcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935015.0929556-476-180859008773985/AnsiballZ_stat.py'
Jan 20 18:50:15 compute-0 sudo[139575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:15 compute-0 python3.9[139577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:15 compute-0 sudo[139575]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:15.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:15 compute-0 sudo[139700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efyoqvdfgydfbyxwsixjqjtavbyxkhbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935015.0929556-476-180859008773985/AnsiballZ_copy.py'
Jan 20 18:50:15 compute-0 sudo[139700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:15 compute-0 python3.9[139702]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935015.0929556-476-180859008773985/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=8db49ccc4e1ff3c8d3bfa8a5b097b9ca443b9391 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:16 compute-0 sudo[139700]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24003330 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004350 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:16 compute-0 sudo[139852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccqzknztudwpvlxngbfhuhrtobzpgjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935016.1249962-476-114651007513862/AnsiballZ_stat.py'
Jan 20 18:50:16 compute-0 sudo[139852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:16 compute-0 python3.9[139854]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:16 compute-0 sudo[139852]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:16.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:16 compute-0 ceph-mon[74381]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:16 compute-0 sudo[139975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uajtwehfzfnqjflntrbvnybkzmusmemd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935016.1249962-476-114651007513862/AnsiballZ_copy.py'
Jan 20 18:50:16 compute-0 sudo[139975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:16 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:16.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:50:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:16.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:50:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:16.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:50:17 compute-0 python3.9[139977]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935016.1249962-476-114651007513862/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a973ccc461ce17db4b983b3054b2c7a1e77a716a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:17 compute-0 sudo[139975]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:50:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:17.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:18 compute-0 sudo[140129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilvosucabnpmbvzpvpxtgxuesalddlds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935017.896754-613-144922955156640/AnsiballZ_file.py'
Jan 20 18:50:18 compute-0 sudo[140129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d4c00a300 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:18 compute-0 python3.9[140131]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:18 compute-0 sudo[140129]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:18.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:18 compute-0 ceph-mon[74381]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:50:18 compute-0 sudo[140281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ostweumigxaqqyyzanhszuvitdxufcix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935018.5730498-635-27200898542750/AnsiballZ_stat.py'
Jan 20 18:50:18 compute-0 sudo[140281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:18 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004370 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:19 compute-0 python3.9[140283]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:19 compute-0 sudo[140281]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:19 compute-0 sudo[140404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewztmebkprnjjuctpdwsjfemdgoojrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935018.5730498-635-27200898542750/AnsiballZ_copy.py'
Jan 20 18:50:19 compute-0 sudo[140404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:19.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:19 compute-0 python3.9[140406]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935018.5730498-635-27200898542750/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:19 compute-0 sudo[140404]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:50:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:50:20 compute-0 sudo[140558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyynczxrepbyqxlaartwgbcphsijfaub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935019.907226-693-77060847664500/AnsiballZ_file.py'
Jan 20 18:50:20 compute-0 sudo[140558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300042a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300042a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:20 compute-0 python3.9[140560]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:20 compute-0 sudo[140558]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:20.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:20 compute-0 ceph-mon[74381]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:20 compute-0 sudo[140710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkzrudchauwrjhhilesepwbudosiekdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935020.5895736-725-71276830392724/AnsiballZ_stat.py'
Jan 20 18:50:20 compute-0 sudo[140710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:20 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24003ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:21 compute-0 python3.9[140712]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:21 compute-0 sudo[140710]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:21 compute-0 sudo[140833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkssiiwsjftetjlkqrkkufcnntmryugx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935020.5895736-725-71276830392724/AnsiballZ_copy.py'
Jan 20 18:50:21 compute-0 sudo[140833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:50:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:21.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:21 compute-0 python3.9[140835]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935020.5895736-725-71276830392724/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:21 compute-0 sudo[140833]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d50004390 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:22 compute-0 sudo[140987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlknvgpcmfnktzwmrgedoabapzewyjjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935022.0428762-769-38865549730363/AnsiballZ_file.py'
Jan 20 18:50:22 compute-0 sudo[140987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300042a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:22 compute-0 python3.9[140989]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:22 compute-0 sudo[140987]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:22.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:22 compute-0 sudo[141139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spiomcvhlfqljrevfgjeriuubakeunrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935022.6724703-792-218702032943419/AnsiballZ_stat.py'
Jan 20 18:50:22 compute-0 sudo[141139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:22 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d300042a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:23 compute-0 python3.9[141141]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:23 compute-0 sudo[141139]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:23 compute-0 ceph-mon[74381]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:50:23 compute-0 sudo[141262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydlwkdzpuiomaxeclnqzqeatvjdrxolv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935022.6724703-792-218702032943419/AnsiballZ_copy.py'
Jan 20 18:50:23 compute-0 sudo[141262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:23.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:23 compute-0 python3.9[141264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935022.6724703-792-218702032943419/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:23 compute-0 sudo[141262]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:24 compute-0 sudo[141416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iskjqueyjeamvwmkadtwuqwhjjgyjkwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935023.8806124-836-149502728243637/AnsiballZ_file.py'
Jan 20 18:50:24 compute-0 sudo[141416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:24 compute-0 kernel: ganesha.nfsd[135626]: segfault at 50 ip 00007f9dd58a832e sp 00007f9d4bffe210 error 4 in libntirpc.so.5.8[7f9dd588d000+2c000] likely on CPU 4 (core 0, socket 4)
Jan 20 18:50:24 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:50:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[122346]: 20/01/2026 18:50:24 : epoch 696fcdda : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9d24003ea0 fd 48 proxy ignored for local
Jan 20 18:50:24 compute-0 systemd[1]: Started Process Core Dump (PID 141419/UID 0).
Jan 20 18:50:24 compute-0 ceph-mon[74381]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:24 compute-0 python3.9[141418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:24 compute-0 sudo[141416]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:24.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:24 compute-0 sudo[141570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aypnzdzhbhfcznvlbwbmxapemxzdiugj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935024.5733721-861-40946747750191/AnsiballZ_stat.py'
Jan 20 18:50:24 compute-0 sudo[141570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:25 compute-0 python3.9[141572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:25 compute-0 sudo[141570]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:50:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:26 compute-0 sudo[141695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfcuxyavmxoalxycxovcmqlzmaeleiod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935024.5733721-861-40946747750191/AnsiballZ_copy.py'
Jan 20 18:50:26 compute-0 sudo[141695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:26 compute-0 systemd-coredump[141420]: Process 122372 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007f9dd58a832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007f9dd58b2900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:50:26 compute-0 systemd[1]: systemd-coredump@3-141419-0.service: Deactivated successfully.
Jan 20 18:50:26 compute-0 systemd[1]: systemd-coredump@3-141419-0.service: Consumed 1.222s CPU time.
Jan 20 18:50:26 compute-0 podman[141702]: 2026-01-20 18:50:26.208774247 +0000 UTC m=+0.024878689 container died b744e04ad1d52c40600907aad25ce1135fd9d076abab06b85b9daa26a8ff322f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:50:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-deef52dab2c8781af92dfddb1afabc9e4883a1251a0089dd9474b0e6197b0942-merged.mount: Deactivated successfully.
Jan 20 18:50:26 compute-0 podman[141702]: 2026-01-20 18:50:26.248264006 +0000 UTC m=+0.064368428 container remove b744e04ad1d52c40600907aad25ce1135fd9d076abab06b85b9daa26a8ff322f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:50:26 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:50:26 compute-0 python3.9[141698]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935024.5733721-861-40946747750191/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:50:26 compute-0 sudo[141695]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:26 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:50:26 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.588s CPU time.
Jan 20 18:50:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:26.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:26 compute-0 sudo[141896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkokrdlktuzdpogcyniepmoqeembecjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935026.519128-920-270053985219821/AnsiballZ_file.py'
Jan 20 18:50:26 compute-0 sudo[141896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:26 compute-0 python3.9[141898]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:26.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:50:27 compute-0 sudo[141896]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:27 compute-0 ceph-mon[74381]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:50:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Jan 20 18:50:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:27 compute-0 sudo[142050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tguxknbsdifhfhkfaukfjjuwzwqajrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935027.1485543-941-186521114688680/AnsiballZ_stat.py'
Jan 20 18:50:27 compute-0 sudo[142050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:27 compute-0 python3.9[142052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:27 compute-0 sudo[142050]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:28 compute-0 sudo[142173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsgsdxwxonwqbgxwabbfotxjlyqpfavv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935027.1485543-941-186521114688680/AnsiballZ_copy.py'
Jan 20 18:50:28 compute-0 sudo[142173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:28 compute-0 ceph-mon[74381]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 340 B/s rd, 0 op/s
Jan 20 18:50:28 compute-0 python3.9[142175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935027.1485543-941-186521114688680/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:28 compute-0 sudo[142173]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:28.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:28 compute-0 sudo[142325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potwqcexzzxhpgkpihuhrxtxnqibvbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935028.6704867-990-115422454398056/AnsiballZ_file.py'
Jan 20 18:50:28 compute-0 sudo[142325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:29 compute-0 python3.9[142327]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:29 compute-0 sudo[142325]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185029 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:50:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:50:29 compute-0 sudo[142478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozqfqygbttfjlxquoyqwbhjycedtsihh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935029.2762232-1009-77315139284962/AnsiballZ_stat.py'
Jan 20 18:50:29 compute-0 sudo[142478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:29.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:29 compute-0 python3.9[142481]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:29] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:50:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:29] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:50:29 compute-0 sudo[142478]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:30 compute-0 sudo[142602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uccfiljamhhtalejuroaritvuansxyxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935029.2762232-1009-77315139284962/AnsiballZ_copy.py'
Jan 20 18:50:30 compute-0 sudo[142602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185030 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:50:30 compute-0 python3.9[142604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935029.2762232-1009-77315139284962/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=827feab96ffaf6d3142bc545a7d8116c3f01f714 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:30 compute-0 sudo[142602]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:30 compute-0 ceph-mon[74381]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:50:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:30.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:31 compute-0 sudo[142629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:50:31 compute-0 sudo[142629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:31 compute-0 sudo[142629]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:50:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:31.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:32.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 18:50:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8388 writes, 34K keys, 8388 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8388 writes, 1629 syncs, 5.15 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8388 writes, 34K keys, 8388 commit groups, 1.0 writes per commit group, ingest: 21.74 MB, 0.04 MB/s
                                           Interval WAL: 8388 writes, 1629 syncs, 5.15 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 18:50:33 compute-0 ceph-mon[74381]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:50:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 18:50:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:33.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:34 compute-0 sshd-session[135932]: Connection closed by 192.168.122.30 port 46892
Jan 20 18:50:34 compute-0 sshd-session[135910]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:50:34 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 18:50:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:34 compute-0 systemd[1]: session-49.scope: Consumed 22.011s CPU time.
Jan 20 18:50:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:34.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:34 compute-0 systemd-logind[796]: Session 49 logged out. Waiting for processes to exit.
Jan 20 18:50:34 compute-0 systemd-logind[796]: Removed session 49.
Jan 20 18:50:35 compute-0 ceph-mon[74381]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 18:50:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 18:50:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:35.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:36 compute-0 ceph-mon[74381]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 18:50:36 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 4.
Jan 20 18:50:36 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:50:36 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.588s CPU time.
Jan 20 18:50:36 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:50:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:36.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:36 compute-0 podman[142704]: 2026-01-20 18:50:36.738209351 +0000 UTC m=+0.047334207 container create 3aa3bc963ce1ed58f63838e10cf13c95ac23a3cfb168804353e230c0d4253dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1d7bd0a357f17236dad48fa7b93889df3cbfc644742c2d5397f0ffa950f846/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1d7bd0a357f17236dad48fa7b93889df3cbfc644742c2d5397f0ffa950f846/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1d7bd0a357f17236dad48fa7b93889df3cbfc644742c2d5397f0ffa950f846/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1d7bd0a357f17236dad48fa7b93889df3cbfc644742c2d5397f0ffa950f846/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:50:36 compute-0 podman[142704]: 2026-01-20 18:50:36.79640488 +0000 UTC m=+0.105529756 container init 3aa3bc963ce1ed58f63838e10cf13c95ac23a3cfb168804353e230c0d4253dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:50:36 compute-0 podman[142704]: 2026-01-20 18:50:36.802086074 +0000 UTC m=+0.111210920 container start 3aa3bc963ce1ed58f63838e10cf13c95ac23a3cfb168804353e230c0d4253dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:50:36 compute-0 bash[142704]: 3aa3bc963ce1ed58f63838e10cf13c95ac23a3cfb168804353e230c0d4253dab
Jan 20 18:50:36 compute-0 podman[142704]: 2026-01-20 18:50:36.711796809 +0000 UTC m=+0.020921685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:50:36 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
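[annotation] That completes one container lifecycle: systemd starts the unit, podman resolves the image (the "image pull" event carries an earlier monotonic offset, m=+0.020, than the "start" event at m=+0.111, so journald simply flushed it late), then create/init/start, the unit helper echoes the container ID, and systemd marks the service started. A hedged way to confirm the resulting state from the host, assuming the podman CLI is on PATH and using the container name from the log:

    import subprocess

    name = "ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx"
    # 'podman inspect -f' prints a Go-template field; State.Status should be
    # "running" once the start event above has completed.
    status = subprocess.run(
        ["podman", "inspect", "-f", "{{.State.Status}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print(status)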
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:36 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:50:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:36.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:50:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:37 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
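[annotation] Ganesha enters a 90-second grace window at 18:50:37. Further down in this capture it reloads client info from the backend (18:50:43), finds no clients waiting to reclaim state ("clid count(0)"), and lifts grace early at 18:50:49 rather than at the scheduled end. The arithmetic:

    from datetime import datetime, timedelta

    start = datetime(2026, 1, 20, 18, 50, 37)
    scheduled_end = start + timedelta(seconds=90)   # 18:52:07
    actual_lift = datetime(2026, 1, 20, 18, 50, 49)
    print(scheduled_end.time(), (actual_lift - start).seconds)
    # -> 18:52:07 12   (lifted after 12s: nothing to reclaim)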
Jan 20 18:50:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:50:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:37.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:38 compute-0 ceph-mon[74381]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:50:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:38.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:50:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:39.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:39] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:50:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:39] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:50:40 compute-0 sshd-session[142765]: Accepted publickey for zuul from 192.168.122.30 port 48888 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:50:40 compute-0 systemd-logind[796]: New session 50 of user zuul.
Jan 20 18:50:40 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 20 18:50:40 compute-0 sshd-session[142765]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:50:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:40.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:40 compute-0 ceph-mon[74381]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:50:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:50:40 compute-0 sudo[142918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqijeuimxagurfebsegekhlnpqxsbjrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935040.288176-21-42897697605530/AnsiballZ_file.py'
Jan 20 18:50:40 compute-0 sudo[142918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:41 compute-0 python3.9[142920]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:41 compute-0 sudo[142918]: pam_unix(sudo:session): session closed for user root
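[annotation] The zuul Ansible run above invoked ansible.builtin.file with state=directory, path=/var/lib/openstack/config/ceph, mode=0755. In plain Python the same effect is roughly the following (a sketch of the outcome, not the module's actual implementation):

    import os

    path = "/var/lib/openstack/config/ceph"
    os.makedirs(path, exist_ok=True)  # state=directory, recurse=False
    os.chmod(path, 0o755)             # mode=0755; makedirs alone is subject to umask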
Jan 20 18:50:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:50:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:41.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:41 compute-0 sudo[143072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pefrmugeexkohuzcnnlryflbaescgwmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935041.3145204-57-213763349378922/AnsiballZ_stat.py'
Jan 20 18:50:41 compute-0 sudo[143072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:41 compute-0 python3.9[143074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:41 compute-0 sudo[143072]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:42 compute-0 sudo[143195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-floaqbamruuxkclcfejjtgzwfynmhebd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935041.3145204-57-213763349378922/AnsiballZ_copy.py'
Jan 20 18:50:42 compute-0 sudo[143195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:42 compute-0 python3.9[143197]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935041.3145204-57-213763349378922/.source.conf _original_basename=ceph.conf follow=False checksum=b31722bfd2dce0319f19e87137b5b22c4698e5bc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:42.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:42 compute-0 sudo[143195]: pam_unix(sudo:session): session closed for user root
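[annotation] The stat/copy pair above is Ansible's idempotent copy at work: the stat task hashes the destination with sha1, and the copy only rewrites the file when that hash differs from the source checksum logged with the task (b31722bf... here). A minimal sketch of the same check, under that assumption:

    import hashlib
    import os
    import shutil

    def copy_if_changed(src, dest, expected_sha1, mode=0o644):
        """Copy src over dest only when dest's sha1 differs (Ansible-style)."""
        if os.path.exists(dest):
            with open(dest, "rb") as f:
                if hashlib.sha1(f.read()).hexdigest() == expected_sha1:
                    return False          # already in the desired state
        shutil.copyfile(src, dest)
        os.chmod(dest, mode)
        return True

The keyring copy at 18:50:43-18:50:44 below repeats the same pattern with mode=0600, which is appropriate for a credential file.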
Jan 20 18:50:42 compute-0 ceph-mon[74381]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:50:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:43 compute-0 sudo[143347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbekurksieyeckrnmizkplsibbxorrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935043.0721147-57-148698133529424/AnsiballZ_stat.py'
Jan 20 18:50:43 compute-0 sudo[143347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:43 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:50:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:43 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:50:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:50:43 compute-0 python3.9[143349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:50:43 compute-0 sudo[143347]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:43.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:43 compute-0 sudo[143472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyhwpmpadyjqicqaraslkviyebbgjzhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935043.0721147-57-148698133529424/AnsiballZ_copy.py'
Jan 20 18:50:43 compute-0 sudo[143472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:44 compute-0 python3.9[143474]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935043.0721147-57-148698133529424/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=465f1f6abe8e4d723d0b6c413f0a5a323af4f262 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:50:44 compute-0 sudo[143472]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:44 compute-0 sshd-session[142768]: Connection closed by 192.168.122.30 port 48888
Jan 20 18:50:44 compute-0 sshd-session[142765]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:50:44 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 18:50:44 compute-0 systemd[1]: session-50.scope: Consumed 2.773s CPU time.
Jan 20 18:50:44 compute-0 systemd-logind[796]: Session 50 logged out. Waiting for processes to exit.
Jan 20 18:50:44 compute-0 systemd-logind[796]: Removed session 50.
Jan 20 18:50:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:44.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:45 compute-0 ceph-mon[74381]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:50:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:50:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:45.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:46 compute-0 ceph-mon[74381]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:50:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:46.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:46.994Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:50:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:46.995Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
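[annotation] Alertmanager's dispatcher keeps failing to POST alerts to the dashboard receivers on compute-1 and compute-2: one attempt dies on the overall context deadline, the other on a TCP connect timeout, and each retry is canceled after 2 attempts. A sketch of that bounded-retry pattern using only the standard library (URL and payload are placeholders, not taken from configuration):

    import urllib.error
    import urllib.request

    def notify(url, payload, attempts=2, timeout=5.0):
        """POST with a per-attempt timeout; give up after N attempts."""
        for i in range(1, attempts + 1):
            try:
                req = urllib.request.Request(
                    url, data=payload,
                    headers={"Content-Type": "application/json"})
                return urllib.request.urlopen(req, timeout=timeout)
            except (urllib.error.URLError, TimeoutError) as exc:
                print(f"attempt {i} failed: {exc}")
        raise RuntimeError(f"notify retry canceled after {attempts} attempts")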
Jan 20 18:50:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:50:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:47.625 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:48 compute-0 ceph-mon[74381]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:50:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:48.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:49 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
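[annotation] The startup sequence above ends healthy ("NFS SERVER INITIALIZED", all threads started) but carries real diagnostics worth triaging: no export entries yet, two config blocks the parser does not recognize (RADOS_URLS, RGW), no DBus socket inside the container so the dbus service thread exits immediately, and kerberos callback credentials are unavailable. A small filter for this log shape, assuming the " :LEVEL :" field layout seen in these lines:

    def triage(lines):
        """Collect ganesha WARN/CRIT messages, keyed by severity."""
        hits = {"CRIT": [], "WARN": []}
        for line in lines:
            for sev in hits:
                marker = f" :{sev} :"
                if marker in line:
                    hits[sev].append(line.split(marker, 1)[1].strip())
        return hits

Whether the DBus and krb5 failures matter depends on whether anything here uses ganesha's DBus admin interface or kerberized callbacks; nothing later in this capture appears to.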
Jan 20 18:50:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:49.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:49] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:50:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:49] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:50:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:50 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7fc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:50 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:50 compute-0 sshd-session[143519]: Accepted publickey for zuul from 192.168.122.30 port 49210 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:50:50 compute-0 systemd-logind[796]: New session 51 of user zuul.
Jan 20 18:50:50 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 20 18:50:50 compute-0 sshd-session[143519]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:50:50 compute-0 ceph-mon[74381]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:50:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:50.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:51 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185051 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:50:51 compute-0 sudo[143673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:50:51 compute-0 sudo[143673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:50:51 compute-0 sudo[143673]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:51 compute-0 python3.9[143672]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:50:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:50:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:51.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185052 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
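[annotation] The recurring TIRPC "proxy header rest len failed ... (will set dead)" events line up with these haproxy health checks: a Layer4 check is a bare TCP connect/close, and ganesha, apparently expecting a PROXY protocol preamble on this listener, treats the empty read as a malformed header and kills the connection. That reading is an inference consistent with an NFS ingress (haproxy in front of ganesha), not something the log states. A probe that would satisfy such a parser sends a PROXY v1 preamble before closing; addresses and the backend port below are hypothetical:

    import socket

    # PROXY protocol v1 preamble (illustrative addresses/ports):
    preamble = b"PROXY TCP4 192.168.122.100 192.168.122.100 49152 12049\r\n"
    backend = ("compute-0", 12049)  # hypothetical ganesha backend port

    with socket.create_connection(backend, timeout=3) as s:
        s.sendall(preamble)  # parser sees a valid header before we disconnect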
Jan 20 18:50:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:52 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:52 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:52.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:52 compute-0 ceph-mon[74381]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1023 B/s wr, 4 op/s
Jan 20 18:50:52 compute-0 sudo[143853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixflpvrktbhfnhcuojzeprdaqpontsym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935052.2208142-57-82523223085442/AnsiballZ_file.py'
Jan 20 18:50:52 compute-0 sudo[143853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:52 compute-0 python3.9[143855]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:52 compute-0 sudo[143853]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:53 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:53 compute-0 sudo[144005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylnevbozvtqghcorcomzgovfqzzgxnwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935053.0916555-57-15439001127170/AnsiballZ_file.py'
Jan 20 18:50:53 compute-0 sudo[144005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:50:53 compute-0 python3.9[144007]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:50:53 compute-0 sudo[144005]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:53.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:54 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:54 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:54 compute-0 python3.9[144159]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:50:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:54 compute-0 ceph-mon[74381]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:50:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:50:54
Jan 20 18:50:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:50:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:50:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'images', 'default.rgw.log', '.nfs', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta']
Jan 20 18:50:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:50:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:55 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
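[annotation] The autoscaler lines follow a visible formula: "pg target" is the pool's share of raw space times its bias times a constant that works out to exactly 300 from the numbers logged above. 300 would be consistent with, say, 100 target PGs per OSD across the 3 OSDs backing this 60 GiB cluster, though that split is an inference, not in the log. The target is then quantized to a power of two, with per-pool floors keeping these near-empty pools at their current pg_num. Reproducing the two nonzero cases:

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    TARGET_PGS = 300  # inferred from the logged ratios (e.g. 100 PGs/OSD * 3 OSDs)

    for name, (usage, bias) in pools.items():
        print(name, usage * bias * TARGET_PGS)
    # .mgr               -> 0.002155724995116...  (log: 0.0021557249951162337)
    # cephfs.cephfs.meta -> 0.000610470795077...  (log: 0.0006104707950771635)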
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:50:55 compute-0 sudo[144309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjkbgyhmkmgymgmyoalfoltximozynsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935054.7097855-126-246364408530396/AnsiballZ_seboolean.py'
Jan 20 18:50:55 compute-0 sudo[144309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:55 compute-0 python3.9[144311]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
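[annotation] The seboolean task sets virt_sandbox_use_netlink persistently; outside Ansible the equivalent is setsebool -P, sketched below. The dbus-broker "avc: op=load_policy" line at 18:50:57 further down is consistent with the policy reload that -P triggers, though the log does not tie them together explicitly.

    import subprocess

    # persistent=True maps to setsebool -P; state=True maps to "on"
    subprocess.run(
        ["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)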
Jan 20 18:50:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:50:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:50:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:55.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:50:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:56 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:56 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:50:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:56.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:56 compute-0 sudo[144309]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:50:56.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:50:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:57 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:57 compute-0 sudo[144469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqhcgjpvnugbedudxkcfnizwixzfohlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935057.1209605-156-66387272461887/AnsiballZ_setup.py'
Jan 20 18:50:57 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 20 18:50:57 compute-0 sudo[144469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:50:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:57 compute-0 python3.9[144471]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:50:57 compute-0 sudo[144469]: pam_unix(sudo:session): session closed for user root
Jan 20 18:50:57 compute-0 ceph-mon[74381]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:50:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:58 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:50:58 compute-0 sudo[144555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtdvusozospptqziggkffavbhhugtlhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935057.1209605-156-66387272461887/AnsiballZ_dnf.py'
Jan 20 18:50:58 compute-0 sudo[144555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:50:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:58 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:58 compute-0 python3.9[144557]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:50:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:50:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:50:58.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:50:58 compute-0 ceph-mon[74381]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:50:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:50:59 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:50:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:50:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:50:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:50:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:50:59.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:50:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:59] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:50:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:50:59] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:50:59 compute-0 sudo[144555]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:00 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:00 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:00 compute-0 sudo[144710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpmevnpncaryashbjwhvtzvlzoeitomf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935060.079184-192-232763442538736/AnsiballZ_systemd.py'
Jan 20 18:51:00 compute-0 sudo[144710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:00.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:00 compute-0 python3.9[144712]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:51:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:01 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:01 compute-0 ceph-mon[74381]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:01.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:02 compute-0 sudo[144710]: pam_unix(sudo:session): session closed for user root
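[annotation] Those two tasks are the standard install-then-enable sequence for openvswitch: ansible.legacy.dnf with state=present, then ansible.builtin.systemd with enabled=True, state=started. The dnf task finished within about a second, which suggests the package was already installed. The CLI equivalent of the two module calls, as a sketch:

    import subprocess

    # ansible.legacy.dnf name=['openvswitch'] state=present
    subprocess.run(["dnf", "install", "-y", "openvswitch"], check=True)
    # ansible.builtin.systemd name=openvswitch.service enabled=True state=started
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"],
                   check=True)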
Jan 20 18:51:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:02 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:02 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:02.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:02 compute-0 ceph-mon[74381]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:02 compute-0 sudo[144867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwnrcbxzojpbxjqhrzwxcxdckuxyktlv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935062.250776-216-159718136061346/AnsiballZ_edpm_nftables_snippet.py'
Jan 20 18:51:02 compute-0 sudo[144867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:02 compute-0 python3[144869]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 20 18:51:02 compute-0 sudo[144867]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:03 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:03 compute-0 sudo[145019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtjtedmetgyaznaesvxadwuvbjyagbzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935063.2997499-243-277186532625641/AnsiballZ_file.py'
Jan 20 18:51:03 compute-0 sudo[145019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:03.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:03 compute-0 python3.9[145022]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:03 compute-0 sudo[145019]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:04 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:04 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec001c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:04 compute-0 sudo[145173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmplozxmrztligdhqwwukrcogoxklenn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935064.171583-267-150664585309764/AnsiballZ_stat.py'
Jan 20 18:51:04 compute-0 sudo[145173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:51:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:04.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:51:04 compute-0 python3.9[145175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:04 compute-0 sudo[145176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:51:04 compute-0 sudo[145173]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:04 compute-0 sudo[145176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:04 compute-0 sudo[145176]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:04 compute-0 sudo[145203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:51:04 compute-0 sudo[145203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:04 compute-0 sudo[145301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywifkhukkwflkyyotepqldwjaxgaavcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935064.171583-267-150664585309764/AnsiballZ_file.py'
Jan 20 18:51:04 compute-0 sudo[145301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:04 compute-0 ceph-mon[74381]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:05 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:05 compute-0 python3.9[145303]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:05 compute-0 sudo[145301]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:51:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:51:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:05 compute-0 sudo[145203]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:05.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:05 compute-0 sudo[145487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqezvnjkugsakotlcalpnrqxmypwepkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935065.5086718-303-64369031139253/AnsiballZ_stat.py'
Jan 20 18:51:05 compute-0 sudo[145487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:51:06 compute-0 python3.9[145489]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:06 compute-0 sudo[145487]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:06 compute-0 sudo[145565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awutjybieavolsqiwfyfoeyzgzatmwxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935065.5086718-303-64369031139253/AnsiballZ_file.py'
Jan 20 18:51:06 compute-0 sudo[145565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:06 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:06 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:06 compute-0 python3.9[145567]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.op_vhvh1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:06 compute-0 sudo[145565]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:06.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:06 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:06 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:06 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:51:06 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:51:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:51:06 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:06 compute-0 sudo[145592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:51:06 compute-0 sudo[145592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:06 compute-0 sudo[145592]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:06 compute-0 sudo[145623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:51:06 compute-0 sudo[145623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:06.998Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:51:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:07 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec001c90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:07 compute-0 sudo[145793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwptnzeecbbneympqheltjazxazisajb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935066.9012403-339-91918943009225/AnsiballZ_stat.py'
Jan 20 18:51:07 compute-0 sudo[145793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.311922476 +0000 UTC m=+0.064439022 container create 839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shockley, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:51:07 compute-0 python3.9[145795]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.273205721 +0000 UTC m=+0.025722277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:51:07 compute-0 systemd[1]: Started libpod-conmon-839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0.scope.
Jan 20 18:51:07 compute-0 sudo[145793]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.4412014 +0000 UTC m=+0.193717936 container init 839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.449339251 +0000 UTC m=+0.201855807 container start 839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shockley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.452853462 +0000 UTC m=+0.205369998 container attach 839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:51:07 compute-0 systemd[1]: libpod-839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0.scope: Deactivated successfully.
Jan 20 18:51:07 compute-0 upbeat_shockley[145829]: 167 167
Jan 20 18:51:07 compute-0 conmon[145829]: conmon 839a02f3e746758f805d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0.scope/container/memory.events
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.456836926 +0000 UTC m=+0.209353472 container died 839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:51:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f4a857f498b23d526676f84752ead9241dc480fa6d3205592de963891746234-merged.mount: Deactivated successfully.
Jan 20 18:51:07 compute-0 podman[145811]: 2026-01-20 18:51:07.500509594 +0000 UTC m=+0.253026140 container remove 839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_shockley, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 18:51:07 compute-0 systemd[1]: libpod-conmon-839a02f3e746758f805d3666655ce32cfb70de4f30e1d231860e79054cb9c6e0.scope: Deactivated successfully.
Jan 20 18:51:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:07 compute-0 sudo[145923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pggvdsxykzxxmqaznfmlakipkadtnxzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935066.9012403-339-91918943009225/AnsiballZ_file.py'
Jan 20 18:51:07 compute-0 sudo[145923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:07.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:07 compute-0 podman[145931]: 2026-01-20 18:51:07.704149881 +0000 UTC m=+0.052593393 container create c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euclid, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:51:07 compute-0 systemd[1]: Started libpod-conmon-c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff.scope.
Jan 20 18:51:07 compute-0 podman[145931]: 2026-01-20 18:51:07.684481619 +0000 UTC m=+0.032925151 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:51:07 compute-0 ceph-mon[74381]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:07 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:07 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:07 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:51:07 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:51:07 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:51:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf80355dca926a63809d74faefded982dfd9e742eeb44311e8d7d14ffb099ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf80355dca926a63809d74faefded982dfd9e742eeb44311e8d7d14ffb099ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf80355dca926a63809d74faefded982dfd9e742eeb44311e8d7d14ffb099ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf80355dca926a63809d74faefded982dfd9e742eeb44311e8d7d14ffb099ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaf80355dca926a63809d74faefded982dfd9e742eeb44311e8d7d14ffb099ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:07 compute-0 podman[145931]: 2026-01-20 18:51:07.826494496 +0000 UTC m=+0.174938028 container init c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euclid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:51:07 compute-0 podman[145931]: 2026-01-20 18:51:07.836359598 +0000 UTC m=+0.184803110 container start c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euclid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 18:51:07 compute-0 podman[145931]: 2026-01-20 18:51:07.842997808 +0000 UTC m=+0.191441320 container attach c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:51:07 compute-0 python3.9[145928]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:07 compute-0 sudo[145923]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:08 compute-0 happy_euclid[145946]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:51:08 compute-0 happy_euclid[145946]: --> All data devices are unavailable
Jan 20 18:51:08 compute-0 systemd[1]: libpod-c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff.scope: Deactivated successfully.
Jan 20 18:51:08 compute-0 podman[145931]: 2026-01-20 18:51:08.205204974 +0000 UTC m=+0.553648486 container died c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-aaf80355dca926a63809d74faefded982dfd9e742eeb44311e8d7d14ffb099ee-merged.mount: Deactivated successfully.
Jan 20 18:51:08 compute-0 podman[145931]: 2026-01-20 18:51:08.247094861 +0000 UTC m=+0.595538373 container remove c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:51:08 compute-0 systemd[1]: libpod-conmon-c8ce053aafac53d957822824284a7b32c02cef1d483ac0d26c6f930d794ed1ff.scope: Deactivated successfully.
Jan 20 18:51:08 compute-0 sudo[145623]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:08 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:08 compute-0 sudo[145996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:51:08 compute-0 sudo[145996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:08 compute-0 sudo[145996]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:08 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:08 compute-0 sudo[146021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:51:08 compute-0 sudo[146021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:08.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.739443766 +0000 UTC m=+0.041161528 container create 30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 18:51:08 compute-0 systemd[1]: Started libpod-conmon-30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb.scope.
Jan 20 18:51:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.723697166 +0000 UTC m=+0.025414948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.833952315 +0000 UTC m=+0.135670097 container init 30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swanson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.843825547 +0000 UTC m=+0.145543319 container start 30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:51:08 compute-0 ceph-mon[74381]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.849274183 +0000 UTC m=+0.150991975 container attach 30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swanson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:51:08 compute-0 goofy_swanson[146156]: 167 167
Jan 20 18:51:08 compute-0 systemd[1]: libpod-30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb.scope: Deactivated successfully.
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.853342239 +0000 UTC m=+0.155060001 container died 30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:51:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e0b8614ebe5d96ac323903428baf7b9ece68a38a7fc959ef5064f078384e184-merged.mount: Deactivated successfully.
Jan 20 18:51:08 compute-0 podman[146139]: 2026-01-20 18:51:08.892393425 +0000 UTC m=+0.194111187 container remove 30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 18:51:08 compute-0 systemd[1]: libpod-conmon-30a027466332b4034d0b67e18748171c608ff4fb157b5134830521502c27bfdb.scope: Deactivated successfully.
Jan 20 18:51:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:09 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:09 compute-0 sudo[146265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aoxsdoosaezjunosrnxoixykfigkvyhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935068.5731068-378-233698051581151/AnsiballZ_command.py'
Jan 20 18:51:09 compute-0 sudo[146265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.063245975 +0000 UTC m=+0.056663829 container create b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1)
Jan 20 18:51:09 compute-0 systemd[1]: Started libpod-conmon-b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce.scope.
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.037591043 +0000 UTC m=+0.031008937 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:51:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4453a5f4a24397e748831dd4a27b23e5af39860ae2e0ee4c156fba2df26acef6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4453a5f4a24397e748831dd4a27b23e5af39860ae2e0ee4c156fba2df26acef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4453a5f4a24397e748831dd4a27b23e5af39860ae2e0ee4c156fba2df26acef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4453a5f4a24397e748831dd4a27b23e5af39860ae2e0ee4c156fba2df26acef6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.157404955 +0000 UTC m=+0.150822829 container init b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.166041911 +0000 UTC m=+0.159459765 container start b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.170383086 +0000 UTC m=+0.163800960 container attach b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:51:09 compute-0 python3.9[146269]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:09 compute-0 sudo[146265]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:09 compute-0 tender_lewin[146272]: {
Jan 20 18:51:09 compute-0 tender_lewin[146272]:     "0": [
Jan 20 18:51:09 compute-0 tender_lewin[146272]:         {
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "devices": [
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "/dev/loop3"
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             ],
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "lv_name": "ceph_lv0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "lv_size": "21470642176",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "name": "ceph_lv0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "tags": {
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.cluster_name": "ceph",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.crush_device_class": "",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.encrypted": "0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.osd_id": "0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.type": "block",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.vdo": "0",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:                 "ceph.with_tpm": "0"
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             },
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "type": "block",
Jan 20 18:51:09 compute-0 tender_lewin[146272]:             "vg_name": "ceph_vg0"
Jan 20 18:51:09 compute-0 tender_lewin[146272]:         }
Jan 20 18:51:09 compute-0 tender_lewin[146272]:     ]
Jan 20 18:51:09 compute-0 tender_lewin[146272]: }
Jan 20 18:51:09 compute-0 systemd[1]: libpod-b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce.scope: Deactivated successfully.
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.491605052 +0000 UTC m=+0.485022906 container died b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:51:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4453a5f4a24397e748831dd4a27b23e5af39860ae2e0ee4c156fba2df26acef6-merged.mount: Deactivated successfully.
Jan 20 18:51:09 compute-0 podman[146234]: 2026-01-20 18:51:09.540520649 +0000 UTC m=+0.533938503 container remove b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_lewin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:51:09 compute-0 systemd[1]: libpod-conmon-b4e1c407efb5d2a5dc8903e783a94c4af91488d5459dbf0a47118f4943023bce.scope: Deactivated successfully.
Jan 20 18:51:09 compute-0 sudo[146021]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:09.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:09 compute-0 sudo[146371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:51:09 compute-0 sudo[146371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:09 compute-0 sudo[146371]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:09 compute-0 sudo[146396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:51:09 compute-0 sudo[146396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:09] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:51:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:09] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:51:09 compute-0 sudo[146508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiorfesbmunspnbwaknnbzplstacvsok ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935069.5035634-402-163940173046245/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 18:51:09 compute-0 sudo[146508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.082935914 +0000 UTC m=+0.038042358 container create 97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brown, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:51:10 compute-0 systemd[1]: Started libpod-conmon-97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d.scope.
Jan 20 18:51:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:51:10 compute-0 python3[146520]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.066916097 +0000 UTC m=+0.022022571 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.163980329 +0000 UTC m=+0.119086793 container init 97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brown, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.170606649 +0000 UTC m=+0.125713103 container start 97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brown, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:51:10 compute-0 xenodochial_brown[146552]: 167 167
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.174667465 +0000 UTC m=+0.129773939 container attach 97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brown, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:51:10 compute-0 systemd[1]: libpod-97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d.scope: Deactivated successfully.
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.175674654 +0000 UTC m=+0.130781118 container died 97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:51:10 compute-0 sudo[146508]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d2332792131850e7ba5a589d482ba938954eabb45ffca51dbb9124640c9a1af-merged.mount: Deactivated successfully.
Jan 20 18:51:10 compute-0 podman[146535]: 2026-01-20 18:51:10.21756323 +0000 UTC m=+0.172669684 container remove 97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_brown, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:51:10 compute-0 systemd[1]: libpod-conmon-97584a56930fc932c08adb87770e07cdfc9b50728bcf31436ae00d269971112d.scope: Deactivated successfully.
Jan 20 18:51:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:10 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:10 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:10 compute-0 podman[146599]: 2026-01-20 18:51:10.378029554 +0000 UTC m=+0.039350415 container create 07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 18:51:10 compute-0 systemd[1]: Started libpod-conmon-07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed.scope.
Jan 20 18:51:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69bd515ffbce569e0562aaa435e833fac32f747628eed2de26ba984b40e541ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69bd515ffbce569e0562aaa435e833fac32f747628eed2de26ba984b40e541ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69bd515ffbce569e0562aaa435e833fac32f747628eed2de26ba984b40e541ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69bd515ffbce569e0562aaa435e833fac32f747628eed2de26ba984b40e541ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:10 compute-0 podman[146599]: 2026-01-20 18:51:10.447315533 +0000 UTC m=+0.108636404 container init 07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:51:10 compute-0 podman[146599]: 2026-01-20 18:51:10.357620611 +0000 UTC m=+0.018941462 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:51:10 compute-0 podman[146599]: 2026-01-20 18:51:10.454427497 +0000 UTC m=+0.115748348 container start 07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:51:10 compute-0 podman[146599]: 2026-01-20 18:51:10.45806469 +0000 UTC m=+0.119385571 container attach 07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:51:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:10.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:10 compute-0 sudo[146759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjravkpshissjdquobpavqufvcyrpyho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935070.4331565-426-230697208261974/AnsiballZ_stat.py'
Jan 20 18:51:10 compute-0 sudo[146759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:10 compute-0 python3.9[146763]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:10 compute-0 sudo[146759]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:11 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:11 compute-0 lvm[146866]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:51:11 compute-0 lvm[146866]: VG ceph_vg0 finished
Jan 20 18:51:11 compute-0 amazing_satoshi[146636]: {}
Jan 20 18:51:11 compute-0 systemd[1]: libpod-07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed.scope: Deactivated successfully.
Jan 20 18:51:11 compute-0 podman[146599]: 2026-01-20 18:51:11.178345666 +0000 UTC m=+0.839666537 container died 07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:51:11 compute-0 systemd[1]: libpod-07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed.scope: Consumed 1.108s CPU time.
Jan 20 18:51:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-69bd515ffbce569e0562aaa435e833fac32f747628eed2de26ba984b40e541ab-merged.mount: Deactivated successfully.
Jan 20 18:51:11 compute-0 podman[146599]: 2026-01-20 18:51:11.230871886 +0000 UTC m=+0.892192738 container remove 07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_satoshi, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:51:11 compute-0 systemd[1]: libpod-conmon-07bdf20028a1b33927002f62014d9480534bddb08e5e60736bff4be826b3feed.scope: Deactivated successfully.
Jan 20 18:51:11 compute-0 sudo[146396]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:51:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:51:11 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:11 compute-0 sudo[146904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:51:11 compute-0 sudo[146904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:11 compute-0 sudo[146904]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:11 compute-0 sudo[146979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukqepaadfjrinxtbbevdpntjzetnlvco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935070.4331565-426-230697208261974/AnsiballZ_copy.py'
Jan 20 18:51:11 compute-0 sudo[146979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:51:11 compute-0 sudo[146982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:51:11 compute-0 sudo[146982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:11 compute-0 sudo[146982]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:11 compute-0 ceph-mon[74381]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:51:11 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:11 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:51:11 compute-0 python3.9[146981]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935070.4331565-426-230697208261974/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:11.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:11 compute-0 sudo[146979]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:12 compute-0 sudo[147158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxpbylzmoyyrwflfizjdvaegypucsivw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935071.9536428-471-125816608903415/AnsiballZ_stat.py'
Jan 20 18:51:12 compute-0 sudo[147158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:12 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:12 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:12 compute-0 python3.9[147160]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:12 compute-0 sudo[147158]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:12 compute-0 ceph-mon[74381]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:51:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:12.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:12 compute-0 sudo[147283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwggfdszcjolqozxmpsyanetvuydjppo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935071.9536428-471-125816608903415/AnsiballZ_copy.py'
Jan 20 18:51:12 compute-0 sudo[147283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:12 compute-0 python3.9[147285]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935071.9536428-471-125816608903415/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:13 compute-0 sudo[147283]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:13 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:13.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:13 compute-0 sudo[147437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyffimljnzdvmiarnfzhdkzsswfollgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935073.3438206-516-139454066895351/AnsiballZ_stat.py'
Jan 20 18:51:13 compute-0 sudo[147437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:13 compute-0 python3.9[147439]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:13 compute-0 sudo[147437]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:14 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:14 compute-0 sudo[147562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhwcotpnrjlxfclfzixnnihxkovdipzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935073.3438206-516-139454066895351/AnsiballZ_copy.py'
Jan 20 18:51:14 compute-0 sudo[147562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:14 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:14 compute-0 python3.9[147564]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935073.3438206-516-139454066895351/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:14 compute-0 sudo[147562]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:14 compute-0 ceph-mon[74381]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:14.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:15 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:15 compute-0 sudo[147714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpylxrysmjcavifwldgbwberxnenidgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935074.8047986-561-34132401456984/AnsiballZ_stat.py'
Jan 20 18:51:15 compute-0 sudo[147714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:15 compute-0 python3.9[147716]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:15 compute-0 sudo[147714]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:15.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:15 compute-0 sudo[147841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-manaelbypnffeyulfggmokixkweodyqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935074.8047986-561-34132401456984/AnsiballZ_copy.py'
Jan 20 18:51:15 compute-0 sudo[147841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:15 compute-0 python3.9[147843]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935074.8047986-561-34132401456984/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:16 compute-0 sudo[147841]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:16 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:16 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:16.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:16 compute-0 sudo[147993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywlverjkqrpxktybeysotdkxhskokdds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935076.3437111-606-277827003571928/AnsiballZ_stat.py'
Jan 20 18:51:16 compute-0 sudo[147993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:16 compute-0 ceph-mon[74381]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:17.000Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:51:17 compute-0 python3.9[147995]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:17 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4003db0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:17 compute-0 sudo[147993]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:17 compute-0 sudo[148118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvzqgqjjcibjxsrbctqfhchpdgylbtyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935076.3437111-606-277827003571928/AnsiballZ_copy.py'
Jan 20 18:51:17 compute-0 sudo[148118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:17.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:17 compute-0 python3.9[148120]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935076.3437111-606-277827003571928/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:17 compute-0 sudo[148118]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:18 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7ec002da0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:18 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7d8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:18.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:18 compute-0 sudo[148272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpoczekkoyhwzedhtwjmawuokfqsntvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935078.4012895-651-9005494472186/AnsiballZ_file.py'
Jan 20 18:51:18 compute-0 sudo[148272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:18 compute-0 python3.9[148274]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:18 compute-0 sudo[148272]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:19 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7f40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:19 compute-0 ceph-mon[74381]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:19 compute-0 sudo[148424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eshesbwmbpprfcbathgizyaxcwwwccwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935079.125253-675-59898095452823/AnsiballZ_command.py'
Jan 20 18:51:19 compute-0 sudo[148424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:19 compute-0 python3.9[148426]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:19 compute-0 sudo[148424]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:19.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:19] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:51:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:19] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:51:20 compute-0 kernel: ganesha.nfsd[143509]: segfault at 50 ip 00007fa8849f432e sp 00007fa8017f9210 error 4 in libntirpc.so.5.8[7fa8849d9000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 20 18:51:20 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:51:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[142719]: 20/01/2026 18:51:20 : epoch 696fce7c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa7e4003db0 fd 38 proxy ignored for local
Jan 20 18:51:20 compute-0 systemd[1]: Started Process Core Dump (PID 148565/UID 0).
Jan 20 18:51:20 compute-0 sudo[148583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zigmogdbphqumcagosyvuisnuqattgui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935079.9063199-699-61002387432978/AnsiballZ_blockinfile.py'
Jan 20 18:51:20 compute-0 sudo[148583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:20 compute-0 python3.9[148585]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:20 compute-0 sudo[148583]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:20.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:20 compute-0 ceph-mon[74381]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:21 compute-0 sudo[148735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwhckvifqruoxxttvyieaprgwwlgbylv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935080.909701-726-258068056780019/AnsiballZ_command.py'
Jan 20 18:51:21 compute-0 sudo[148735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:21 compute-0 python3.9[148737]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:21 compute-0 sudo[148735]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:51:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:21.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:21 compute-0 systemd-coredump[148580]: Process 142723 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 46:
                                                    #0  0x00007fa8849f432e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:51:21 compute-0 systemd[1]: systemd-coredump@4-148565-0.service: Deactivated successfully.
Jan 20 18:51:21 compute-0 systemd[1]: systemd-coredump@4-148565-0.service: Consumed 1.331s CPU time.
Jan 20 18:51:21 compute-0 podman[148796]: 2026-01-20 18:51:21.867287029 +0000 UTC m=+0.031087289 container died 3aa3bc963ce1ed58f63838e10cf13c95ac23a3cfb168804353e230c0d4253dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e1d7bd0a357f17236dad48fa7b93889df3cbfc644742c2d5397f0ffa950f846-merged.mount: Deactivated successfully.
Jan 20 18:51:21 compute-0 podman[148796]: 2026-01-20 18:51:21.980292387 +0000 UTC m=+0.144092647 container remove 3aa3bc963ce1ed58f63838e10cf13c95ac23a3cfb168804353e230c0d4253dab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:51:21 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:51:22 compute-0 sudo[148933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opiefnwjaegnvbovuzjnrugllboenmpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935081.8080473-750-255754303112395/AnsiballZ_stat.py'
Jan 20 18:51:22 compute-0 sudo[148933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:22 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:51:22 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.615s CPU time.
Jan 20 18:51:22 compute-0 python3.9[148939]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:51:22 compute-0 sudo[148933]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:22.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:22 compute-0 sudo[149092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkwawjpfnikxthjhyvlzsrgktihydzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935082.5991962-774-249929426475263/AnsiballZ_command.py'
Jan 20 18:51:22 compute-0 sudo[149092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:23 compute-0 ceph-mon[74381]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:51:23 compute-0 python3.9[149094]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:23 compute-0 sudo[149092]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:23.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:23 compute-0 sudo[149249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qetcjpphfupxecdutpsqgvogprzvqfar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935083.4758017-798-201680280627332/AnsiballZ_file.py'
Jan 20 18:51:23 compute-0 sudo[149249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:23 compute-0 python3.9[149251]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:23 compute-0 sudo[149249]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:24.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:25 compute-0 ceph-mon[74381]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:51:25 compute-0 python3.9[149401]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:51:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:25.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185126 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:51:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:26.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:26 compute-0 sudo[149554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyrshxuhwqeokwuamiixnevunqlxblqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935086.2372735-918-215814707565688/AnsiballZ_command.py'
Jan 20 18:51:26 compute-0 sudo[149554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:26 compute-0 python3.9[149556]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:26 compute-0 ovs-vsctl[149557]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 20 18:51:27 compute-0 sudo[149554]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:27.002Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:51:27 compute-0 ceph-mon[74381]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:27 compute-0 sudo[149707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osxqtoyeyagekykekzwpedihtqpxuvmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935087.2382379-945-212001308387352/AnsiballZ_command.py'
Jan 20 18:51:27 compute-0 sudo[149707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:27.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:27 compute-0 python3.9[149709]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:27 compute-0 sudo[149707]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:28 compute-0 sudo[149864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfvgurmnqxqruodecksgdrazwdeazgat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935088.0977733-969-110714665943194/AnsiballZ_command.py'
Jan 20 18:51:28 compute-0 sudo[149864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:28 compute-0 python3.9[149866]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:51:28 compute-0 ovs-vsctl[149867]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 20 18:51:28 compute-0 sudo[149864]: pam_unix(sudo:session): session closed for user root
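
Per the ovs-vsctl record above, the transaction adds a Manager row with target ptcp:6640:127.0.0.1, i.e. ovsdb-server now accepts passive TCP connections on loopback port 6640 (the `ptcp:********@manager` form in the Ansible record appears to be the same command with its URL-shaped argument censored by Ansible's heuristic log sanitizer). Two standard ways to confirm:

    ovs-vsctl get-manager                 # should print ptcp:6640:127.0.0.1
    ss -ltn '( sport = :6640 )'           # the listening socket itself
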
Jan 20 18:51:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:28.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:28 compute-0 ceph-mon[74381]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:29 compute-0 python3.9[150017]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:51:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:29.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:29] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:51:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:29] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:51:30 compute-0 sudo[150171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwxovyurctszhzatlfuivgsfortbychc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935089.7746627-1020-222524107851420/AnsiballZ_file.py'
Jan 20 18:51:30 compute-0 sudo[150171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:30 compute-0 python3.9[150173]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:51:30 compute-0 sudo[150171]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:30.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:30 compute-0 sudo[150323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aawtyzvbemhwzugtmejothxnxxozyqkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935090.6080513-1044-236605497949284/AnsiballZ_stat.py'
Jan 20 18:51:30 compute-0 sudo[150323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:31 compute-0 python3.9[150325]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:31 compute-0 sudo[150323]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:31 compute-0 ceph-mon[74381]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:31 compute-0 sudo[150401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pphulsszgbgxebmsxwkpnhrjwyfnowtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935090.6080513-1044-236605497949284/AnsiballZ_file.py'
Jan 20 18:51:31 compute-0 sudo[150401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:51:31 compute-0 sudo[150405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:51:31 compute-0 sudo[150405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:31 compute-0 sudo[150405]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:31.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:31 compute-0 python3.9[150403]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:51:31 compute-0 sudo[150401]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:32 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 5.
Jan 20 18:51:32 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:51:32 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.615s CPU time.
Jan 20 18:51:32 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:51:32 compute-0 sudo[150580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiwynhrwxztgmmautpeqfwflrqeyfrlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935091.8692753-1044-278217287977931/AnsiballZ_stat.py'
Jan 20 18:51:32 compute-0 sudo[150580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:32 compute-0 ceph-mon[74381]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:51:32 compute-0 python3.9[150583]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:32 compute-0 podman[150631]: 2026-01-20 18:51:32.36579153 +0000 UTC m=+0.041532268 container create 42c66f3f6edd73565f6d3d0eadd59107030548d81ef3c5fa0c83fa495713fc90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:51:32 compute-0 sudo[150580]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8cb5a89715401cd863bceb946e4ff712d35a14f320d2ee3fd7dd1dbf7b71a0/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8cb5a89715401cd863bceb946e4ff712d35a14f320d2ee3fd7dd1dbf7b71a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8cb5a89715401cd863bceb946e4ff712d35a14f320d2ee3fd7dd1dbf7b71a0/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8cb5a89715401cd863bceb946e4ff712d35a14f320d2ee3fd7dd1dbf7b71a0/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:51:32 compute-0 podman[150631]: 2026-01-20 18:51:32.429069797 +0000 UTC m=+0.104810545 container init 42c66f3f6edd73565f6d3d0eadd59107030548d81ef3c5fa0c83fa495713fc90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 18:51:32 compute-0 podman[150631]: 2026-01-20 18:51:32.434026199 +0000 UTC m=+0.109766937 container start 42c66f3f6edd73565f6d3d0eadd59107030548d81ef3c5fa0c83fa495713fc90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:51:32 compute-0 bash[150631]: 42c66f3f6edd73565f6d3d0eadd59107030548d81ef3c5fa0c83fa495713fc90
Jan 20 18:51:32 compute-0 podman[150631]: 2026-01-20 18:51:32.347698743 +0000 UTC m=+0.023439511 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
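
podman logs the full create -> init -> start lifecycle for the cephadm-managed ganesha container above (the trailing "image pull" record is simply emitted late). Afterwards its state can be checked with standard podman commands, container name copied from those records:

    podman ps --filter name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx
    podman inspect --format '{{.State.Status}} {{.ImageName}}' \
        ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx
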
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:51:32 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:51:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
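
ganesha opens a 90-second grace window during which NFS clients may reclaim state; the "grace reload client info" and "NOT IN GRACE" lines further down (18:51:38-18:51:44) show it being lifted early once the backend reports zero clients with reclaims pending. One way to follow just those transitions in the journal, unit name taken from the systemd lines above:

    journalctl -u ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service \
        | grep -E 'GRACE|grace'
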
Jan 20 18:51:32 compute-0 sudo[150763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxizyetnlzhynmpgtqyssmtxufimhbia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935091.8692753-1044-278217287977931/AnsiballZ_file.py'
Jan 20 18:51:32 compute-0 sudo[150763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:32.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:32 compute-0 python3.9[150765]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:51:32 compute-0 sudo[150763]: pam_unix(sudo:session): session closed for user root
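
Each of these deploy steps is a stat-then-file pair: ansible.legacy.stat checksums the target, then ansible.legacy.file enforces ownership, mode and SELinux type without touching content (force=False, state=file). A hedged shell analogue of what the file module enforces for this particular script:

    chown root:root /var/local/libexec/edpm-start-podman-container
    chmod 0700 /var/local/libexec/edpm-start-podman-container
    chcon -t container_file_t /var/local/libexec/edpm-start-podman-container
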
Jan 20 18:51:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:51:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:33.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:33 compute-0 sudo[150917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtxfanuojexlepsbdezkeymuuuyvunf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935093.5001278-1113-224210143117835/AnsiballZ_file.py'
Jan 20 18:51:33 compute-0 sudo[150917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:34 compute-0 python3.9[150919]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:34 compute-0 sudo[150917]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:34 compute-0 sudo[151069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcbvvtlmwhtgpjdxjnpyskqsmcxiwokd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935094.2805812-1137-87062737141592/AnsiballZ_stat.py'
Jan 20 18:51:34 compute-0 sudo[151069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:34.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:34 compute-0 python3.9[151071]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:34 compute-0 sudo[151069]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:34 compute-0 ceph-mon[74381]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:51:35 compute-0 sudo[151147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erafkerzrxudhumivqmvzhwpeyrjsknk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935094.2805812-1137-87062737141592/AnsiballZ_file.py'
Jan 20 18:51:35 compute-0 sudo[151147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:35 compute-0 python3.9[151149]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:35 compute-0 sudo[151147]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:51:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:35.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:35 compute-0 sudo[151301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lokwsvzeiujddkzamvzwcseuwslglodw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935095.6706443-1173-251846168404824/AnsiballZ_stat.py'
Jan 20 18:51:35 compute-0 sudo[151301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:36 compute-0 python3.9[151303]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:36 compute-0 sudo[151301]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:36 compute-0 sudo[151379]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blkawrvdhnzibtozktmvdkqthuwhaukv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935095.6706443-1173-251846168404824/AnsiballZ_file.py'
Jan 20 18:51:36 compute-0 sudo[151379]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:36.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:36 compute-0 python3.9[151381]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:36 compute-0 sudo[151379]: pam_unix(sudo:session): session closed for user root
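
systemd preset files are one-line policy statements; given that the play enables edpm-container-shutdown right after this, the 91-edpm-container-shutdown.preset written here presumably holds a single enable line. A sketch of creating such a preset by hand (the contents are an assumption, not read from the log):

    cat > /etc/systemd/system-preset/91-edpm-container-shutdown.preset <<'EOF'
    enable edpm-container-shutdown.service
    EOF
    systemctl preset edpm-container-shutdown.service   # apply the preset policy to the unit
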
Jan 20 18:51:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:37.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:51:37 compute-0 ceph-mon[74381]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:51:37 compute-0 sudo[151532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzdzgdoyjtgrzjmcaxiwpbugqkxgjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935097.1158624-1209-168134903943517/AnsiballZ_systemd.py'
Jan 20 18:51:37 compute-0 sudo[151532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:51:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:37.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:37 compute-0 python3.9[151534]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:51:37 compute-0 systemd[1]: Reloading.
Jan 20 18:51:37 compute-0 systemd-sysv-generator[151566]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:51:37 compute-0 systemd-rc-local-generator[151562]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:51:38 compute-0 sudo[151532]: pam_unix(sudo:session): session closed for user root
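
The ansible.builtin.systemd invocation above (daemon_reload=True, enabled=True, state=started) collapses to two systemctl calls, which is also why a Reloading pass appears in the journal first:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
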
Jan 20 18:51:38 compute-0 ceph-mon[74381]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:51:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:38 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:51:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:38 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:51:38 compute-0 sudo[151723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajuqtzrnagrjauwcawikuuiahdvpsquo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935098.375575-1233-119983268828364/AnsiballZ_stat.py'
Jan 20 18:51:38 compute-0 sudo[151723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:38.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:38 compute-0 python3.9[151725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:38 compute-0 sudo[151723]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:39 compute-0 sudo[151801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfymgpmydplywuyxumxvfekobhbhnzki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935098.375575-1233-119983268828364/AnsiballZ_file.py'
Jan 20 18:51:39 compute-0 sudo[151801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:39 compute-0 python3.9[151803]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:39 compute-0 sudo[151801]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:51:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:39.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:39] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:51:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:39] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Jan 20 18:51:40 compute-0 sudo[151955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlekotpfnpjrszdgphulwnqellgxmvxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935099.7765658-1269-110699205734226/AnsiballZ_stat.py'
Jan 20 18:51:40 compute-0 sudo[151955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:40 compute-0 python3.9[151957]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:40 compute-0 sudo[151955]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:40 compute-0 sudo[152033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-siblkhkqxnnhxjlwjuczsimhoeuhcetr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935099.7765658-1269-110699205734226/AnsiballZ_file.py'
Jan 20 18:51:40 compute-0 sudo[152033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:40 compute-0 ceph-mon[74381]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:51:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:51:40 compute-0 python3.9[152035]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:40 compute-0 sudo[152033]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:40.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:41 compute-0 sudo[152185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emzscbooixytgdylwhplohdetidjuoeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935101.2218516-1305-84936820294237/AnsiballZ_systemd.py'
Jan 20 18:51:41 compute-0 sudo[152185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:51:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:41.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:41 compute-0 python3.9[152187]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:51:41 compute-0 systemd[1]: Reloading.
Jan 20 18:51:41 compute-0 systemd-rc-local-generator[152215]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:51:41 compute-0 systemd-sysv-generator[152218]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
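
Both Reloading passes emit the same two generator notes. They are harmless here; the rc.local one disappears once the script is marked executable (the SysV network warning would need a native unit shipped by the package):

    chmod +x /etc/rc.d/rc.local
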
Jan 20 18:51:42 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 18:51:42 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 18:51:42 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 18:51:42 compute-0 systemd[1]: Finished Create netns directory.
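
netns-placeholder runs as a oneshot that creates and immediately removes a placeholder network namespace, which matches run-netns-placeholder.mount appearing and deactivating above; the effect is to ensure /run/netns exists as a mount point for later container namespaces. The journal does not include the unit file, so this is a guessed minimal shape, not the actual EDPM unit:

    cat > /etc/systemd/system/netns-placeholder.service <<'EOF'
    [Unit]
    Description=Create netns directory

    [Service]
    Type=oneshot
    ExecStart=/usr/sbin/ip netns add placeholder
    ExecStart=/usr/sbin/ip netns delete placeholder

    [Install]
    WantedBy=multi-user.target
    EOF
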
Jan 20 18:51:42 compute-0 sudo[152185]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:42 compute-0 ceph-mon[74381]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:51:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:42.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:42 compute-0 sudo[152380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbovhjkgxvwwgumekotzquaajpbyhdsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935102.6450765-1335-266026027101268/AnsiballZ_file.py'
Jan 20 18:51:42 compute-0 sudo[152380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:43 compute-0 python3.9[152382]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:51:43 compute-0 sudo[152380]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:51:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:43.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:43 compute-0 sudo[152534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgriemjcvnsvurczsxhcwbakwkjcuodl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935103.494208-1359-208617453247480/AnsiballZ_stat.py'
Jan 20 18:51:43 compute-0 sudo[152534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:43 compute-0 python3.9[152536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:43 compute-0 sudo[152534]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:44 compute-0 sudo[152657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehpcswbvdwfrywbmwkacazwqjwhiumxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935103.494208-1359-208617453247480/AnsiballZ_copy.py'
Jan 20 18:51:44 compute-0 sudo[152657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:44 compute-0 python3.9[152659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935103.494208-1359-208617453247480/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:51:44 compute-0 sudo[152657]: pam_unix(sudo:session): session closed for user root
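
The copy task records the sha1 of the healthcheck script it deployed, so the result can be verified directly on the host:

    sha1sum /var/lib/openstack/healthchecks/ovn_controller/healthcheck
    # expect 4098dd010265fabdf5c26b97d169fc4e575ff457 per the task record above
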
Jan 20 18:51:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:51:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:44.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:51:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
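
The DBUS :CRIT lines in the startup banner come from ganesha probing for the system bus inside its container, where /run/dbus/system_bus_socket is not mounted; the daemon still initializes and only the dbus service thread exits. A quick way to confirm the socket really is absent in that container:

    podman exec ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx \
        ls -l /run/dbus/system_bus_socket \
        || echo "no system bus in the container (matches the CRIT lines above)"
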
Jan 20 18:51:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:45 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f366c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:45 compute-0 ceph-mon[74381]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:51:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:51:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:45.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:45 compute-0 sudo[152826]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwklzuzckesvideavmxapzumyupvzetk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935105.4616175-1410-233236276466332/AnsiballZ_file.py'
Jan 20 18:51:45 compute-0 sudo[152826]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:45 compute-0 python3.9[152828]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:45 compute-0 sudo[152826]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:46 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:46 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:46.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:46 compute-0 sudo[152978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyyymcszgjzyslpqujyaolkcgwywscqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935106.5429814-1434-125035908343314/AnsiballZ_file.py'
Jan 20 18:51:46 compute-0 sudo[152978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:47.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:51:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:47.003Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:51:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:47.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
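
Alertmanager keeps failing to deliver ceph-dashboard webhooks to the peer nodes on port 8443 (earlier as context deadline exceeded, here as a plain dial timeout), so the receivers on compute-1/compute-2 are unreachable from this host. A direct probe of the same endpoints, taken from the error text:

    for h in compute-1 compute-2; do
        curl -sS -m 5 -o /dev/null -w "$h: %{http_code}\n" \
            "http://$h.ctlplane.example.com:8443/api/prometheus_receiver" || true
    done
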
Jan 20 18:51:47 compute-0 python3.9[152980]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:51:47 compute-0 sudo[152978]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:47 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:47 compute-0 ceph-mon[74381]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:51:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:51:47 compute-0 sudo[153132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmcquimebgopejcpqigskzzqpjrvhntr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935107.341916-1458-60579441059125/AnsiballZ_stat.py'
Jan 20 18:51:47 compute-0 sudo[153132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:47.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:47 compute-0 python3.9[153134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:51:47 compute-0 sudo[153132]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:48 compute-0 sudo[153255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfozjcgkvvrislkalpviwliokzmxnjdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935107.341916-1458-60579441059125/AnsiballZ_copy.py'
Jan 20 18:51:48 compute-0 sudo[153255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185148 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:51:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:48 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:48 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:48 compute-0 python3.9[153257]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935107.341916-1458-60579441059125/.source.json _original_basename=.ba8ukhzu follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:48 compute-0 sudo[153255]: pam_unix(sudo:session): session closed for user root
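
Only the checksum of ovn_controller.json makes it into the journal, not its body. Kolla startup configs follow a small JSON schema (a command to exec, config_files to copy in, permissions to apply); the following is a schema illustration under that assumption, not the real file deployed here:

    cat > /tmp/ovn_controller.json.example <<'EOF'
    {
      "command": "/usr/bin/ovn-controller unix:/run/openvswitch/db.sock",
      "config_files": [
        {
          "source": "/var/lib/kolla/config_files/src/*",
          "dest": "/",
          "merge": true,
          "preserve_properties": true
        }
      ],
      "permissions": []
    }
    EOF
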
Jan 20 18:51:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:49 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:49 compute-0 python3.9[153407]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:51:49 compute-0 ceph-mon[74381]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:51:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:51:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:49.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:49] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:51:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:49] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:51:50 compute-0 ceph-mon[74381]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:51:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:50 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36480016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:50 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:50.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:51 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:51 compute-0 sudo[153830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxmpqmubjodbuzwfneuivdigudfyaium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935111.0460684-1578-50736537848711/AnsiballZ_container_config_data.py'
Jan 20 18:51:51 compute-0 sudo[153830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:51:51 compute-0 sudo[153835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:51:51 compute-0 sudo[153835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:51:51 compute-0 sudo[153835]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:51:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:51.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:51:51 compute-0 python3.9[153832]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 20 18:51:51 compute-0 sudo[153830]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:52 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:52 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36480016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:52.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:52 compute-0 sudo[154009]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjriqoeqzygolbvejwyasevtafnqvfyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935112.218942-1611-163535962327408/AnsiballZ_container_config_hash.py'
Jan 20 18:51:52 compute-0 sudo[154009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:52 compute-0 python3.9[154011]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 18:51:52 compute-0 sudo[154009]: pam_unix(sudo:session): session closed for user root
Jan 20 18:51:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:53 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:53 compute-0 ceph-mon[74381]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:51:54 compute-0 sudo[154163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrjxyzluguzweemhiciwsqhlckzenfvi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935113.4272735-1641-51313495631684/AnsiballZ_edpm_container_manage.py'
Jan 20 18:51:54 compute-0 sudo[154163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:51:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:54 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:54 compute-0 python3[154165]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 18:51:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:54 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:54.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:51:54
Jan 20 18:51:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:51:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:51:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.nfs', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes']
Jan 20 18:51:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
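The balancer pass above evaluated the listed pools in upmap mode and prepared no changes, which is the expected steady state for an already-balanced cluster. The same state can be checked interactively; a sketch, assuming the ceph CLI and an admin keyring are available on the node:

    import subprocess

    # `ceph balancer status` reports mode, active flag and last optimization;
    # this mirrors the "Optimize plan ... prepared 0/10 upmap changes" lines.
    print(subprocess.run(["ceph", "balancer", "status"],
                         capture_output=True, text=True, check=True).stdout)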
Jan 20 18:51:55 compute-0 ceph-mon[74381]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:51:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:55 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36480016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
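The pg_autoscaler lines carry enough numbers to reconstruct the raw PG target: for every pool above, the printed target equals capacity_ratio x bias x 300. The factor 300 is an inference (it matches the default mon_target_pg_per_osd of 100 times 3 OSDs), not something the log states. A short check that reproduces the printed values:

    # Reproduce the pg_autoscaler targets printed above.
    # raw_target = capacity_ratio * bias * N, with N = 300 inferred
    # (mon_target_pg_per_osd=100 * 3 OSDs is an assumption, not logged).
    N = 300
    pools = [
        # (name, capacity_ratio, bias, printed pg target)
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0, 0.0006486252197694863),
        ("default.rgw.meta",   1.2718141564107572e-07, 4.0, 0.00015261769876929088),
        (".nfs",               6.359070782053786e-08,  1.0, 1.907721234616136e-05),
    ]
    for name, ratio, bias, printed in pools:
        raw = ratio * bias * N
        assert abs(raw - printed) < 1e-12, name
        print(f"{name}: raw target {raw:.6g}")
    # Targets this far below the current pg_num leave the quantized value at
    # the current setting, which is why every pool logs "quantized to (current)".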
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:51:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:55.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:56 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:56 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:51:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:56.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:51:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:51:57.004Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:51:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:57 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:57 compute-0 ceph-mon[74381]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:57.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:58 compute-0 ceph-mon[74381]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:51:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:51:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:58 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:58 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:51:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:51:58.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:51:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:51:59 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:51:59 compute-0 podman[154180]: 2026-01-20 18:51:59.351175009 +0000 UTC m=+4.931250603 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 18:51:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:51:59 compute-0 podman[154305]: 2026-01-20 18:51:59.473580373 +0000 UTC m=+0.026347408 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 18:51:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:51:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:51:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:51:59.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:51:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:51:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:51:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:00 compute-0 podman[154305]: 2026-01-20 18:52:00.024724764 +0000 UTC m=+0.577491769 container create d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Jan 20 18:52:00 compute-0 python3[154165]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
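Everything the module passed to podman create, including the full config_data dict, is baked into container labels, so it can be read back later without consulting the Ansible side. A sketch (assumes podman on PATH; container name taken from the log):

    import subprocess

    # Read back a label that edpm_container_manage attached at create time.
    out = subprocess.run(
        ["podman", "inspect", "--format",
         '{{ index .Config.Labels "config_id" }}', "ovn_controller"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # -> ovn_controller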
Jan 20 18:52:00 compute-0 sudo[154163]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:00 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:00 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:00 compute-0 sudo[154495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkzyfpqjisumypsnsmslrrirjlhxwdtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935120.3754714-1665-6923041049637/AnsiballZ_stat.py'
Jan 20 18:52:00 compute-0 sudo[154495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:00.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:00 compute-0 python3.9[154497]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:52:00 compute-0 sudo[154495]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:01 compute-0 ceph-mon[74381]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:01 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:01 compute-0 sudo[154649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etjqlvfvjyhufazjeoulonhlbnshzjac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935121.2440126-1692-85845054503903/AnsiballZ_file.py'
Jan 20 18:52:01 compute-0 sudo[154649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:01.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:01 compute-0 python3.9[154651]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:01 compute-0 sudo[154649]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:02 compute-0 sudo[154727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqycvfaxcyngcdnoofalbgksxjfbgcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935121.2440126-1692-85845054503903/AnsiballZ_stat.py'
Jan 20 18:52:02 compute-0 sudo[154727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:02 compute-0 python3.9[154729]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:52:02 compute-0 sudo[154727]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:02 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:02 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:52:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:02.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:52:02 compute-0 sudo[154878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydupqehdbbbevjigqlrnhfsiowwrfhan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935122.3130012-1692-247311184141375/AnsiballZ_copy.py'
Jan 20 18:52:02 compute-0 sudo[154878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:03 compute-0 python3.9[154880]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935122.3130012-1692-247311184141375/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:03 compute-0 sudo[154878]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:03 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:03 compute-0 ceph-mon[74381]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:03 compute-0 sudo[154954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulgnuadzgtdcqgeqahaoltzowuvedxyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935122.3130012-1692-247311184141375/AnsiballZ_systemd.py'
Jan 20 18:52:03 compute-0 sudo[154954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:03 compute-0 python3.9[154956]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 18:52:03 compute-0 systemd[1]: Reloading.
Jan 20 18:52:03 compute-0 systemd-rc-local-generator[154986]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:52:03 compute-0 systemd-sysv-generator[154989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:52:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:03.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:03 compute-0 sudo[154954]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:04 compute-0 sudo[155067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvmoubsrpdjwgddwsgsbmygebffczon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935122.3130012-1692-247311184141375/AnsiballZ_systemd.py'
Jan 20 18:52:04 compute-0 sudo[155067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:04 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:04 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:04 compute-0 python3.9[155069]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:52:04 compute-0 systemd[1]: Reloading.
Jan 20 18:52:04 compute-0 systemd-rc-local-generator[155099]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:52:04 compute-0 systemd-sysv-generator[155102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:52:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:04.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:04 compute-0 ceph-mon[74381]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:04 compute-0 systemd[1]: Starting ovn_controller container...
Jan 20 18:52:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:05 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3810b7d6482adba2488eb264b49d828bb155dce61f04c4ae7db1b0b34df7236/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:05.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:05 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf.
Jan 20 18:52:05 compute-0 podman[155111]: 2026-01-20 18:52:05.754415313 +0000 UTC m=+0.763187152 container init d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:52:05 compute-0 ovn_controller[155128]: + sudo -E kolla_set_configs
Jan 20 18:52:05 compute-0 podman[155111]: 2026-01-20 18:52:05.788191168 +0000 UTC m=+0.796962997 container start d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 20 18:52:05 compute-0 edpm-start-podman-container[155111]: ovn_controller
Jan 20 18:52:05 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 20 18:52:05 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 20 18:52:05 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 20 18:52:05 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 20 18:52:05 compute-0 systemd[155158]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 20 18:52:05 compute-0 edpm-start-podman-container[155110]: Creating additional drop-in dependency for "ovn_controller" (d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf)
Jan 20 18:52:05 compute-0 podman[155136]: 2026-01-20 18:52:05.869251494 +0000 UTC m=+0.072485318 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 18:52:05 compute-0 systemd[1]: d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf-da372ca7b95a56a.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 18:52:05 compute-0 systemd[1]: d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf-da372ca7b95a56a.service: Failed with result 'exit-code'.
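The transient unit failing here is the first /usr/bin/podman healthcheck run firing while the container is still coming up: two lines earlier podman reports health_status=starting with health_failing_streak=1, so a single exit-code failure at this point is expected rather than a fault. A minimal sketch that waits for the health state to settle, using the same command systemd invokes above:

    import subprocess
    import time

    # `podman healthcheck run <name>` exits 0 once the container's
    # healthcheck passes; the unit above runs exactly this command.
    for _ in range(30):
        if subprocess.run(["podman", "healthcheck", "run",
                           "ovn_controller"]).returncode == 0:
            print("healthy")
            break
        time.sleep(2)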
Jan 20 18:52:05 compute-0 systemd[1]: Reloading.
Jan 20 18:52:05 compute-0 systemd-rc-local-generator[155218]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:52:05 compute-0 systemd-sysv-generator[155221]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:52:05 compute-0 systemd[155158]: Queued start job for default target Main User Target.
Jan 20 18:52:05 compute-0 systemd[155158]: Created slice User Application Slice.
Jan 20 18:52:05 compute-0 systemd[155158]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 20 18:52:05 compute-0 systemd[155158]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:52:05 compute-0 systemd[155158]: Reached target Paths.
Jan 20 18:52:05 compute-0 systemd[155158]: Reached target Timers.
Jan 20 18:52:06 compute-0 systemd[155158]: Starting D-Bus User Message Bus Socket...
Jan 20 18:52:06 compute-0 systemd[155158]: Starting Create User's Volatile Files and Directories...
Jan 20 18:52:06 compute-0 systemd[155158]: Finished Create User's Volatile Files and Directories.
Jan 20 18:52:06 compute-0 systemd[155158]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:52:06 compute-0 systemd[155158]: Reached target Sockets.
Jan 20 18:52:06 compute-0 systemd[155158]: Reached target Basic System.
Jan 20 18:52:06 compute-0 systemd[155158]: Reached target Main User Target.
Jan 20 18:52:06 compute-0 systemd[155158]: Startup finished in 161ms.
Jan 20 18:52:06 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 20 18:52:06 compute-0 systemd[1]: Started ovn_controller container.
Jan 20 18:52:06 compute-0 systemd[1]: Started Session c1 of User root.
Jan 20 18:52:06 compute-0 sudo[155067]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:06 compute-0 ovn_controller[155128]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 18:52:06 compute-0 ovn_controller[155128]: INFO:__main__:Validating config file
Jan 20 18:52:06 compute-0 ovn_controller[155128]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 18:52:06 compute-0 ovn_controller[155128]: INFO:__main__:Writing out command to execute
Jan 20 18:52:06 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: ++ cat /run_command
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + ARGS=
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + sudo kolla_copy_cacerts
Jan 20 18:52:06 compute-0 systemd[1]: Started Session c2 of User root.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + [[ ! -n '' ]]
Jan 20 18:52:06 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + . kolla_extend_start
Jan 20 18:52:06 compute-0 ovn_controller[155128]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + umask 0022
Jan 20 18:52:06 compute-0 ovn_controller[155128]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3043] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3053] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <warn>  [1768935126.3056] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3065] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3070] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3074] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 18:52:06 compute-0 kernel: br-int: entered promiscuous mode
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
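ovn-controller reaches the southbound DB over SSL with the key, certificate, and CA bundle passed on its command line above. The same endpoint can be probed independently with those files; a sketch only, with paths and hostname copied from the log (depending on how the server certificate was issued, hostname verification may need to be relaxed):

    import socket
    import ssl

    # Client key/cert/CA exactly as passed to ovn-controller above.
    ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ovndbca.crt")
    ctx.load_cert_chain("/etc/pki/tls/certs/ovndb.crt",
                        "/etc/pki/tls/private/ovndb.key")
    host = "ovsdbserver-sb.openstack.svc"
    with socket.create_connection((host, 6642), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print(tls.version())  # handshake succeeded if this prints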
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00014|main|INFO|OVS feature set changed, force recompute.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3306] manager: (ovn-8ec1a8-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3312] manager: (ovn-53ced8-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3320] manager: (ovn-f4a56c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 20 18:52:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:06 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:06 compute-0 systemd-udevd[155264]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:52:06 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 20 18:52:06 compute-0 systemd-udevd[155268]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:52:06 compute-0 ovn_controller[155128]: 2026-01-20T18:52:06Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3463] device (genev_sys_6081): carrier: link connected
Jan 20 18:52:06 compute-0 NetworkManager[48914]: <info>  [1768935126.3466] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Jan 20 18:52:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:06 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:52:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:06.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
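[Annotation] The anonymous "HEAD / HTTP/1.0" 200 requests that radosgw logs from 192.168.122.100 and 192.168.122.102 recur roughly once per second with sub-millisecond latency, the signature of health-check probes (likely a load balancer or monitoring agent; the log itself does not say which). Reproducing the probe is trivial; the port below is an assumption, since the beast frontend's listen port is not shown in these lines:

    import http.client

    def rgw_alive(host: str, port: int = 8080, timeout: float = 2.0) -> bool:
        """Send the same anonymous HEAD / probe seen in the log; True on HTTP 200."""
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")
            return conn.getresponse().status == 200
        except OSError:
            return False
        finally:
            conn.close()
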
Jan 20 18:52:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:07.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:52:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:07.006Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:52:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:07.006Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
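[Annotation] The two warn lines are first-attempt i/o timeouts POSTing alert notifications to the ceph-dashboard webhook receivers on compute-1 and compute-2, and the dispatch error shows Alertmanager cancelling after its third attempt with "context deadline exceeded": nothing is answering on port 8443 on either host. A bounded retry loop of the same shape (URL copied from the log; the attempt count matches the log, the per-attempt timeout is an assumption):

    import json
    import time
    import urllib.error
    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    def post_with_retries(url: str, payload: dict,
                          attempts: int = 3, timeout: float = 5.0) -> bool:
        """POST JSON, retrying a fixed number of times like the dispatcher above."""
        req = urllib.request.Request(
            url, data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        for attempt in range(1, attempts + 1):
            try:
                with urllib.request.urlopen(req, timeout=timeout) as resp:
                    return resp.status < 300
            except (urllib.error.URLError, OSError) as exc:
                print(f"attempt {attempt} failed: {exc}")
                time.sleep(1)
        return False
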
Jan 20 18:52:07 compute-0 python3.9[155395]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 18:52:07 compute-0 ceph-mon[74381]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:07 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:52:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:07.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:08 compute-0 sudo[155547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrshsbzogvcjgonbfxfwnaiwauguohke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935127.7948282-1827-182658455211708/AnsiballZ_stat.py'
Jan 20 18:52:08 compute-0 sudo[155547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:08 compute-0 python3.9[155549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:08 compute-0 sudo[155547]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:08 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:08 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:08 compute-0 sudo[155670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qytinmhwrmrfpskomnwisbdmalwpdukg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935127.7948282-1827-182658455211708/AnsiballZ_copy.py'
Jan 20 18:52:08 compute-0 sudo[155670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:08.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:08 compute-0 python3.9[155672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935127.7948282-1827-182658455211708/.source.yaml _original_basename=.horlhk_h follow=False checksum=9e47e45efbd45bbefd5e2c8372c6401984b6b77f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:08 compute-0 sudo[155670]: pam_unix(sudo:session): session closed for user root
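[Annotation] The stat/copy pair above is Ansible's idempotent file write: ansible.legacy.stat checksums the existing /var/lib/edpm-config/deployed_services.yaml with SHA-1, and ansible.legacy.copy only installs the staged .source.yaml when that differs from the incoming checksum (9e47e45efbd45bbefd5e2c8372c6401984b6b77f above). The comparison itself reduces to:

    import hashlib

    def sha1sum(path: str) -> str:
        """Hex SHA-1 of a file, as Ansible's stat module computes it."""
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Rewrite only when content actually changed (path and checksum from the log):
    expected = "9e47e45efbd45bbefd5e2c8372c6401984b6b77f"
    changed = sha1sum("/var/lib/edpm-config/deployed_services.yaml") != expected
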
Jan 20 18:52:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:09 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:09 compute-0 ceph-mon[74381]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:52:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:09 compute-0 sudo[155824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvjzdzrtfwkmobgmolxtpjkywzcdovfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935129.3217685-1872-273501047005231/AnsiballZ_command.py'
Jan 20 18:52:09 compute-0 sudo[155824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:09.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:09 compute-0 python3.9[155826]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:52:09 compute-0 ovs-vsctl[155827]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 20 18:52:09 compute-0 sudo[155824]: pam_unix(sudo:session): session closed for user root
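[Annotation] This task clears other_config:hw-offload from the top-level Open_vSwitch record, switching off the hardware-offload setting. ovs-vsctl's remove command treats an absent key or value as a no-op rather than an error (unlike get, as the ovn-cms-options lookup just below demonstrates), so the playbook can run it unconditionally. A thin wrapper:

    import subprocess

    def ovs_clear_other_config(key: str) -> None:
        """Drop a key from the Open_vSwitch other_config map; no-op if absent."""
        subprocess.run(
            ["ovs-vsctl", "remove", "Open_vSwitch", ".", "other_config", key],
            check=True,
        )

    ovs_clear_other_config("hw-offload")   # the call made by the task above
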
Jan 20 18:52:10 compute-0 ceph-mon[74381]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:52:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:10 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:10 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:10 compute-0 sudo[155977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjgirhaoemzrwryjjsdazrczcwsdvjmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935130.171665-1896-249766676956833/AnsiballZ_command.py'
Jan 20 18:52:10 compute-0 sudo[155977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:10.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:10 compute-0 python3.9[155979]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:52:10 compute-0 ovs-vsctl[155981]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 20 18:52:10 compute-0 sudo[155977]: pam_unix(sudo:session): session closed for user root
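[Annotation] Here the playbook reads external_ids:ovn-cms-options and pipes it through sed to strip the quotes; since the key was never set, ovs-vsctl exits non-zero with the db_ctl_base error above. Passing --if-exists turns the missing key into an empty (and quieter) result, avoiding both the error and the sed pipeline; a sketch:

    import subprocess

    def get_ovn_cms_options() -> str:
        """Read external_ids:ovn-cms-options from Open_vSwitch; "" when unset."""
        out = subprocess.run(
            ["ovs-vsctl", "--if-exists", "get",
             "Open_vSwitch", ".", "external_ids:ovn-cms-options"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out.strip('"')   # same effect as the sed 's/"//g' in the task
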
Jan 20 18:52:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:11 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:11 compute-0 sudo[156108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:52:11 compute-0 sudo[156108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:11 compute-0 sudo[156108]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:11 compute-0 sudo[156158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzymyzwfxfxqnxllttmsarresoetzivp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935131.3391922-1938-60387029210256/AnsiballZ_command.py'
Jan 20 18:52:11 compute-0 sudo[156158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:11 compute-0 sudo[156161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:52:11 compute-0 sudo[156161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:52:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:11.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:52:11 compute-0 sudo[156187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:52:11 compute-0 sudo[156187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:11 compute-0 sudo[156187]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:11 compute-0 python3.9[156162]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:52:11 compute-0 ovs-vsctl[156212]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 20 18:52:11 compute-0 sudo[156158]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:12 compute-0 sudo[156161]: pam_unix(sudo:session): session closed for user root
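[Annotation] The ceph-admin sudo pairs are cephadm's SSH orchestration at work: before each operation the mgr probes the host for an interpreter (which python3) and then executes the copy of cephadm it staged under /var/lib/ceph/<fsid>/ with a --timeout guard. Roughly this pattern (the path placeholders below are mine; the staged filename above is generated by the orchestrator):

    import shutil
    import subprocess

    def run_cephadm(cephadm_path: str, *args: str, timeout: int = 895):
        """Probe for python3 (as the orchestrator does), then run staged cephadm."""
        python = shutil.which("python3")
        if python is None:
            raise RuntimeError("no python3 on this host")
        return subprocess.run(
            [python, cephadm_path, "--timeout", str(timeout), *args],
            check=True,
        )

    # e.g. run_cephadm("/var/lib/ceph/<fsid>/cephadm.<digest>", "gather-facts")
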
Jan 20 18:52:12 compute-0 sshd-session[143522]: Connection closed by 192.168.122.30 port 49210
Jan 20 18:52:12 compute-0 sshd-session[143519]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:52:12 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 18:52:12 compute-0 systemd[1]: session-51.scope: Consumed 58.116s CPU time.
Jan 20 18:52:12 compute-0 systemd-logind[796]: Session 51 logged out. Waiting for processes to exit.
Jan 20 18:52:12 compute-0 systemd-logind[796]: Removed session 51.
Jan 20 18:52:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:12 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:52:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:12 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:52:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:12.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:12 compute-0 ceph-mon[74381]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:12 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:52:12 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:52:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:12 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:12 compute-0 sudo[156269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:52:12 compute-0 sudo[156269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:12 compute-0 sudo[156269]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:12 compute-0 sudo[156294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:52:12 compute-0 sudo[156294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:13 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3664003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
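[Annotation] The _set_new_cache_sizes line is the monitor's cache autotuner dividing its roughly 0.95 GiB cache_size budget between incremental osdmaps (inc_alloc), full osdmaps (full_alloc), and the rocksdb key/value cache (kv_alloc). The three allocations sum to 1,010,827,264 bytes, just under the 1,020,054,731-byte target, i.e. the tuner stays within budget:

    # Values copied from the ceph-mon line above.
    cache_size = 1_020_054_731
    allocs = {"inc_alloc": 343_932_928,
              "full_alloc": 348_127_232,
              "kv_alloc": 318_767_104}
    total = sum(allocs.values())                 # 1_010_827_264
    assert total <= cache_size                   # within the budget
    print(f"{total / cache_size:.1%} of cache_size allocated")   # ~99.1%
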
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.369708988 +0000 UTC m=+0.054946397 container create 0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cannon, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:52:13 compute-0 systemd[1]: Started libpod-conmon-0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9.scope.
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.344520924 +0000 UTC m=+0.029758363 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:52:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.476190506 +0000 UTC m=+0.161427965 container init 0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cannon, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.484884439 +0000 UTC m=+0.170121878 container start 0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.490060453 +0000 UTC m=+0.175297942 container attach 0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Jan 20 18:52:13 compute-0 systemd[1]: libpod-0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9.scope: Deactivated successfully.
Jan 20 18:52:13 compute-0 laughing_cannon[156378]: 167 167
Jan 20 18:52:13 compute-0 conmon[156378]: conmon 0ebacbfeae4e04e7e72c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9.scope/container/memory.events
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.497981485 +0000 UTC m=+0.183218894 container died 0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cannon, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:52:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a2e8ae048160bf8e4b2d8f4f75faf3293a2dacbf6d75e9074dab68adad7eb1e-merged.mount: Deactivated successfully.
Jan 20 18:52:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:13 compute-0 podman[156362]: 2026-01-20 18:52:13.56215849 +0000 UTC m=+0.247395929 container remove 0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:52:13 compute-0 systemd[1]: libpod-conmon-0ebacbfeae4e04e7e72ca690b9e9344a2e55f24d1a8452493255d6667d97d9d9.scope: Deactivated successfully.
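[Annotation] cephadm runs each ceph-volume call in a throwaway podman container from the pinned image digest; the create / init / start / attach / died / remove events above are one complete run, lasting under 0.2 s (18:52:13.369 create to 18:52:13.562 remove), and the "167 167" the container printed appears to be a uid/gid probe (167 is the ceph user and group in the image). Reconstructing such lifetimes from these events is mechanical; the regex below is tailored to the podman lines above:

    import re
    from datetime import datetime

    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ \S+ \S+ "
        r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def container_lifetimes(lines):
        """Map container id -> seconds between its create and remove events."""
        created, lifetimes = {}, {}
        for line in lines:
            m = EVENT.search(line)
            if not m:
                continue
            # Truncate to microseconds so datetime.fromisoformat can parse it.
            ts = datetime.fromisoformat(m["ts"][:26])
            if m["event"] == "create":
                created[m["cid"]] = ts
            elif m["event"] == "remove" and m["cid"] in created:
                lifetimes[m["cid"]] = (ts - created.pop(m["cid"])).total_seconds()
        return lifetimes
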
Jan 20 18:52:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:13.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:13 compute-0 podman[156403]: 2026-01-20 18:52:13.770920818 +0000 UTC m=+0.059438503 container create 7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 18:52:13 compute-0 systemd[1]: Started libpod-conmon-7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079.scope.
Jan 20 18:52:13 compute-0 podman[156403]: 2026-01-20 18:52:13.741292259 +0000 UTC m=+0.029810024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:52:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbebf6960154eb5fdc3aa31160de4ea1a5f76bffc8eaf5c42c749d92714a9e7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbebf6960154eb5fdc3aa31160de4ea1a5f76bffc8eaf5c42c749d92714a9e7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbebf6960154eb5fdc3aa31160de4ea1a5f76bffc8eaf5c42c749d92714a9e7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbebf6960154eb5fdc3aa31160de4ea1a5f76bffc8eaf5c42c749d92714a9e7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbebf6960154eb5fdc3aa31160de4ea1a5f76bffc8eaf5c42c749d92714a9e7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:13 compute-0 podman[156403]: 2026-01-20 18:52:13.853880957 +0000 UTC m=+0.142398662 container init 7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamarr, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:52:13 compute-0 podman[156403]: 2026-01-20 18:52:13.860571964 +0000 UTC m=+0.149089639 container start 7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:52:13 compute-0 podman[156403]: 2026-01-20 18:52:13.863715832 +0000 UTC m=+0.152233517 container attach 7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamarr, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:52:14 compute-0 admiring_lamarr[156419]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:52:14 compute-0 admiring_lamarr[156419]: --> All data devices are unavailable
Jan 20 18:52:14 compute-0 systemd[1]: libpod-7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079.scope: Deactivated successfully.
Jan 20 18:52:14 compute-0 podman[156403]: 2026-01-20 18:52:14.171717155 +0000 UTC m=+0.460234840 container died 7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamarr, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbebf6960154eb5fdc3aa31160de4ea1a5f76bffc8eaf5c42c749d92714a9e7c-merged.mount: Deactivated successfully.
Jan 20 18:52:14 compute-0 podman[156403]: 2026-01-20 18:52:14.213183215 +0000 UTC m=+0.501700900 container remove 7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 18:52:14 compute-0 systemd[1]: libpod-conmon-7754a41155ac786226657ee69dd66360a7dd9148f0303868b2b3bdd42a965079.scope: Deactivated successfully.
Jan 20 18:52:14 compute-0 sudo[156294]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:14 compute-0 sudo[156445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:52:14 compute-0 sudo[156445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:14 compute-0 sudo[156445]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:14 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:14 compute-0 sudo[156470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:52:14 compute-0 sudo[156470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:14 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:14.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.819625022 +0000 UTC m=+0.054896106 container create 369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:52:14 compute-0 systemd[1]: Started libpod-conmon-369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7.scope.
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.790933269 +0000 UTC m=+0.026204373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:52:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.913708153 +0000 UTC m=+0.148979247 container init 369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.926619174 +0000 UTC m=+0.161890268 container start 369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_booth, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.931383708 +0000 UTC m=+0.166654772 container attach 369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:52:14 compute-0 zen_booth[156553]: 167 167
Jan 20 18:52:14 compute-0 systemd[1]: libpod-369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7.scope: Deactivated successfully.
Jan 20 18:52:14 compute-0 conmon[156553]: conmon 369fb9fc560550791323 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7.scope/container/memory.events
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.936243453 +0000 UTC m=+0.171514547 container died 369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:52:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-171cf991f6b00590c534ab73a0a13e4822799102d71a4b464443d7fad7d65e7f-merged.mount: Deactivated successfully.
Jan 20 18:52:14 compute-0 podman[156537]: 2026-01-20 18:52:14.988137224 +0000 UTC m=+0.223408318 container remove 369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_booth, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:52:15 compute-0 systemd[1]: libpod-conmon-369fb9fc560550791323e2565d737e3d81c6fb358dba3881a2f6bb08f5d417c7.scope: Deactivated successfully.
Jan 20 18:52:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:15 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.14424977 +0000 UTC m=+0.043105767 container create ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mahavira, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:52:15 compute-0 systemd[1]: Started libpod-conmon-ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad.scope.
Jan 20 18:52:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300c43c612d7c488c6ecab596c6bdbaba4a5ac98fc28972ce66962a07b044963/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300c43c612d7c488c6ecab596c6bdbaba4a5ac98fc28972ce66962a07b044963/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300c43c612d7c488c6ecab596c6bdbaba4a5ac98fc28972ce66962a07b044963/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/300c43c612d7c488c6ecab596c6bdbaba4a5ac98fc28972ce66962a07b044963/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.212698284 +0000 UTC m=+0.111554271 container init ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mahavira, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.126129073 +0000 UTC m=+0.024985070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.227514538 +0000 UTC m=+0.126370515 container start ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.23190188 +0000 UTC m=+0.130757857 container attach ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mahavira, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]: {
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:     "0": [
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:         {
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "devices": [
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "/dev/loop3"
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             ],
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "lv_name": "ceph_lv0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "lv_size": "21470642176",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "name": "ceph_lv0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "tags": {
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.cluster_name": "ceph",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.crush_device_class": "",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.encrypted": "0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.osd_id": "0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.type": "block",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.vdo": "0",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:                 "ceph.with_tpm": "0"
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             },
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "type": "block",
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:             "vg_name": "ceph_vg0"
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:         }
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]:     ]
Jan 20 18:52:15 compute-0 jovial_mahavira[156594]: }
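[Annotation] This lvm list output also explains the earlier "All data devices are unavailable" from lvm batch: the only candidate, /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), is already tagged as the block device of osd_id 0 (osd_fsid 5f53c0c6-6046-4836-83f9-ff93da7e674e), so ceph-volume correctly refuses to re-consume it. Mapping OSDs to their devices out of this JSON takes only a few lines:

    import json

    def osd_devices(lvm_list_json: str):
        """osd_id -> (lv_path, physical devices) from ceph-volume lvm list JSON."""
        mapping = {}
        for osd_id, lvs in json.loads(lvm_list_json).items():
            for lv in lvs:
                if lv.get("type") == "block":
                    mapping[osd_id] = (lv["lv_path"], lv["devices"])
        return mapping

    # With the output above: {"0": ("/dev/ceph_vg0/ceph_lv0", ["/dev/loop3"])}
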
Jan 20 18:52:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:15 compute-0 systemd[1]: libpod-ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad.scope: Deactivated successfully.
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.551626431 +0000 UTC m=+0.450482418 container died ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:52:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-300c43c612d7c488c6ecab596c6bdbaba4a5ac98fc28972ce66962a07b044963-merged.mount: Deactivated successfully.
Jan 20 18:52:15 compute-0 podman[156577]: 2026-01-20 18:52:15.598892933 +0000 UTC m=+0.497748920 container remove ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:52:15 compute-0 systemd[1]: libpod-conmon-ac5948123a12b51849f2825b9a4b42c2d1be72ce78198c15da9a274122935bad.scope: Deactivated successfully.
Jan 20 18:52:15 compute-0 sudo[156470]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:15 compute-0 sudo[156616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:52:15 compute-0 sudo[156616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:15 compute-0 sudo[156616]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:15.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:15 compute-0 sudo[156641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:52:15 compute-0 sudo[156641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.210914657 +0000 UTC m=+0.052587542 container create 7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bhabha, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:52:16 compute-0 systemd[1]: Started libpod-conmon-7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b.scope.
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.18491022 +0000 UTC m=+0.026583145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:52:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.300899993 +0000 UTC m=+0.142572918 container init 7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bhabha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.306368436 +0000 UTC m=+0.148041301 container start 7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bhabha, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.311051877 +0000 UTC m=+0.152724852 container attach 7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:52:16 compute-0 mystifying_bhabha[156722]: 167 167
Jan 20 18:52:16 compute-0 systemd[1]: libpod-7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b.scope: Deactivated successfully.
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.312973461 +0000 UTC m=+0.154646326 container died 7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:52:16 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 20 18:52:16 compute-0 systemd[155158]: Activating special unit Exit the Session...
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped target Main User Target.
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped target Basic System.
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped target Paths.
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped target Sockets.
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped target Timers.
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 18:52:16 compute-0 systemd[155158]: Closed D-Bus User Message Bus Socket.
Jan 20 18:52:16 compute-0 systemd[155158]: Stopped Create User's Volatile Files and Directories.
Jan 20 18:52:16 compute-0 systemd[155158]: Removed slice User Application Slice.
Jan 20 18:52:16 compute-0 systemd[155158]: Reached target Shutdown.
Jan 20 18:52:16 compute-0 systemd[155158]: Finished Exit the Session.
Jan 20 18:52:16 compute-0 systemd[155158]: Reached target Exit the Session.
Jan 20 18:52:16 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 20 18:52:16 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 20 18:52:16 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 20 18:52:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e5411caf756a24b000b8978ac6343bf3a6b58ccbddd2daafab9c99fdd8f552b-merged.mount: Deactivated successfully.
Jan 20 18:52:16 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 20 18:52:16 compute-0 podman[156706]: 2026-01-20 18:52:16.354471491 +0000 UTC m=+0.196144366 container remove 7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_bhabha, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:52:16 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 20 18:52:16 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 20 18:52:16 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 20 18:52:16 compute-0 systemd[1]: libpod-conmon-7555b858093c1d4739c3cd746c6edf8b388f905de7d60a691f2b3c1f65d1044b.scope: Deactivated successfully.
Jan 20 18:52:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:16 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:16 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:16 compute-0 podman[156749]: 2026-01-20 18:52:16.553397333 +0000 UTC m=+0.055825252 container create 43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_carson, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 18:52:16 compute-0 systemd[1]: Started libpod-conmon-43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f.scope.
Jan 20 18:52:16 compute-0 podman[156749]: 2026-01-20 18:52:16.527721156 +0000 UTC m=+0.030149065 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:52:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22229fef2c8022e2845134b62a76fe30a886ae23c2396d532db70fb7c1f4a1a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22229fef2c8022e2845134b62a76fe30a886ae23c2396d532db70fb7c1f4a1a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22229fef2c8022e2845134b62a76fe30a886ae23c2396d532db70fb7c1f4a1a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22229fef2c8022e2845134b62a76fe30a886ae23c2396d532db70fb7c1f4a1a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:52:16 compute-0 podman[156749]: 2026-01-20 18:52:16.652015882 +0000 UTC m=+0.154443761 container init 43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_carson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:52:16 compute-0 podman[156749]: 2026-01-20 18:52:16.668992056 +0000 UTC m=+0.171419935 container start 43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_carson, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 18:52:16 compute-0 podman[156749]: 2026-01-20 18:52:16.672631767 +0000 UTC m=+0.175059666 container attach 43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_carson, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:52:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:16.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:16 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:52:16 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:52:16 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:52:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:17.008Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:52:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:17 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:17 compute-0 lvm[156839]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:52:17 compute-0 lvm[156839]: VG ceph_vg0 finished
Jan 20 18:52:17 compute-0 busy_carson[156765]: {}
Jan 20 18:52:17 compute-0 systemd[1]: libpod-43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f.scope: Deactivated successfully.
Jan 20 18:52:17 compute-0 systemd[1]: libpod-43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f.scope: Consumed 1.103s CPU time.
Jan 20 18:52:17 compute-0 podman[156749]: 2026-01-20 18:52:17.372484917 +0000 UTC m=+0.874912836 container died 43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_carson, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 18:52:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-22229fef2c8022e2845134b62a76fe30a886ae23c2396d532db70fb7c1f4a1a9-merged.mount: Deactivated successfully.
Jan 20 18:52:17 compute-0 podman[156749]: 2026-01-20 18:52:17.417145186 +0000 UTC m=+0.919573075 container remove 43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 18:52:17 compute-0 systemd[1]: libpod-conmon-43787959d302118139e89a80ca1740444aad72442ae62cce162cd71fe9dafc9f.scope: Deactivated successfully.
Jan 20 18:52:17 compute-0 sudo[156641]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:52:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:52:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:52:17 compute-0 sudo[156855]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:52:17 compute-0 sudo[156855]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:17 compute-0 sudo[156855]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:17.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:17 compute-0 ceph-mon[74381]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:17 compute-0 ceph-mon[74381]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:52:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:18 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:18 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:18.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:18 compute-0 ceph-mon[74381]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:52:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:19 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3670001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:19 compute-0 sshd-session[156882]: Accepted publickey for zuul from 192.168.122.30 port 45390 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:52:19 compute-0 systemd-logind[796]: New session 53 of user zuul.
Jan 20 18:52:19 compute-0 systemd[1]: Started Session 53 of User zuul.
Jan 20 18:52:19 compute-0 sshd-session[156882]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:52:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:19.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:19] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Jan 20 18:52:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:19] "GET /metrics HTTP/1.1" 200 48329 "" "Prometheus/2.51.0"
Jan 20 18:52:20 compute-0 python3.9[157037]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:52:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:20 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:20 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:20.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:21 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:21 compute-0 ceph-mon[74381]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:21 compute-0 sudo[157191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwnuqqnurngspdforzdarygxggldwwam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935140.9313362-57-196923272632444/AnsiballZ_file.py'
Jan 20 18:52:21 compute-0 sudo[157191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:21 compute-0 python3.9[157193]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:21 compute-0 sudo[157191]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:21.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:22 compute-0 sudo[157345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yupgzimebfsxjtdvuhsrwhsgmxrybzdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935141.7860873-57-49982971869011/AnsiballZ_file.py'
Jan 20 18:52:22 compute-0 sudo[157345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:22 compute-0 python3.9[157347]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:22 compute-0 sudo[157345]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:22 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36700023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:22 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:22.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:22 compute-0 sudo[157497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbzzvikkvgfxcurojjmzbrysyagwkfle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935142.5436397-57-238532201683572/AnsiballZ_file.py'
Jan 20 18:52:22 compute-0 sudo[157497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:23 compute-0 python3.9[157499]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:23 compute-0 sudo[157497]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:23 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:23 compute-0 ceph-mon[74381]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:23 compute-0 sudo[157649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmdwnowqjpjjcczmwkmsuitqboohjxku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935143.210907-57-240454051130297/AnsiballZ_file.py'
Jan 20 18:52:23 compute-0 sudo[157649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:23 compute-0 python3.9[157651]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:23.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:23 compute-0 sudo[157649]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:24 compute-0 sudo[157803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubhrmqczbnlarjpcykmwxwafcvsqhtmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935143.9096465-57-194459279355572/AnsiballZ_file.py'
Jan 20 18:52:24 compute-0 sudo[157803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:24 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:24 compute-0 python3.9[157805]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:24 compute-0 sudo[157803]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:24 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36700023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:24.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:52:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:25 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:25 compute-0 ceph-mon[74381]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:52:25 compute-0 python3.9[157956]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:52:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:25.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:26 compute-0 sudo[158108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnizvyksusyxudqtubssscbltxycggke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935145.7983522-189-127782116830533/AnsiballZ_seboolean.py'
Jan 20 18:52:26 compute-0 sudo[158108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:26 compute-0 ceph-mon[74381]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 150 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:52:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:26 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:26 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:26 compute-0 python3.9[158110]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 18:52:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:26.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:27.009Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:52:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:27 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:27 compute-0 sudo[158108]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:27.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:28 compute-0 python3.9[158265]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:28 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:28 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:28.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:28 compute-0 python3.9[158386]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935147.5331867-213-153508917240747/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:29 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:29 compute-0 python3.9[158536]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:29.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:29] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:29] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:30 compute-0 python3.9[158659]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935149.1275527-258-127298435534437/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:30 compute-0 ceph-mon[74381]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:30 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:30 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:30.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:31 compute-0 sudo[158809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmcruuropruizjjmakltutxstvgxdueu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935150.8508294-309-142973805264495/AnsiballZ_setup.py'
Jan 20 18:52:31 compute-0 sudo[158809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:31 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:31 compute-0 ceph-mon[74381]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:31 compute-0 python3.9[158811]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:52:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:31 compute-0 sudo[158809]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:31.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:31 compute-0 sudo[158822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:52:31 compute-0 sudo[158822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:31 compute-0 sudo[158822]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:32 compute-0 sudo[158920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfbcsnjfltbjmyqmypbbszwnjhzastza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935150.8508294-309-142973805264495/AnsiballZ_dnf.py'
Jan 20 18:52:32 compute-0 sudo[158920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:32 compute-0 python3.9[158922]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:52:32 compute-0 ceph-mon[74381]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:32 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:32.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:33 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:33 compute-0 sudo[158920]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:33.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:34 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:34 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:34.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:34 compute-0 sudo[159075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkxihheqjwsrjtpnzqznbwoejgipxsxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935154.1701941-345-29771029956555/AnsiballZ_systemd.py'
Jan 20 18:52:34 compute-0 sudo[159075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:34 compute-0 ceph-mon[74381]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:35 compute-0 python3.9[159077]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:52:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:35 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185235 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:52:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:35.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:36 compute-0 ovn_controller[155128]: 2026-01-20T18:52:36Z|00025|memory|INFO|16384 kB peak resident set size after 29.8 seconds
Jan 20 18:52:36 compute-0 ovn_controller[155128]: 2026-01-20T18:52:36Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Jan 20 18:52:36 compute-0 sudo[159075]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:36 compute-0 podman[159082]: 2026-01-20 18:52:36.158794517 +0000 UTC m=+0.134266825 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 20 18:52:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:36 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36400016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:36 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:36.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:36 compute-0 python3.9[159257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:37.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:52:37 compute-0 ceph-mon[74381]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:37 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:37 compute-0 python3.9[159378]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935156.481067-369-84914384839204/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:37.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:37 compute-0 python3.9[159530]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:38 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:38 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:38 compute-0 python3.9[159651]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935157.5249095-369-273480563983907/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:38.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:39 compute-0 ceph-mon[74381]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 77 KiB/s rd, 0 B/s wr, 128 op/s
Jan 20 18:52:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:39 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:39.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:39] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:39] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:52:40 compute-0 python3.9[159803]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:40 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004070 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:40 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:40 compute-0 python3.9[159924]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935159.7117982-501-166236592156443/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:40.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:41 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:41 compute-0 python3.9[160074]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:52:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:41.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:41 compute-0 python3.9[160197]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935160.841734-501-272606204712415/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:42 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:42 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:42 compute-0 ceph-mon[74381]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:42 compute-0 ceph-mon[74381]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:52:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:42.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:52:42 compute-0 python3.9[160347]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:52:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:43 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:43 compute-0 sudo[160501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwhxgvneyjtexlwutyylsilijpnthfpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935163.4054933-615-94236285165124/AnsiballZ_file.py'
Jan 20 18:52:43 compute-0 sudo[160501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:43.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:43 compute-0 python3.9[160503]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:43 compute-0 sudo[160501]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36500040b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:44 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:44.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:45 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:45.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:46 compute-0 sudo[160655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezwpcfuoliczccbvpxoxpcxxswkfipaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935165.7127788-639-62598144996132/AnsiballZ_stat.py'
Jan 20 18:52:46 compute-0 sudo[160655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:46 compute-0 python3.9[160657]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:46 compute-0 sudo[160655]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:46 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:46 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:52:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:46 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36500040d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:46 compute-0 sudo[160733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnbqqjczwakktlqiqsikhsodycjiehmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935165.7127788-639-62598144996132/AnsiballZ_file.py'
Jan 20 18:52:46 compute-0 sudo[160733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:46 compute-0 python3.9[160735]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:46.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:46 compute-0 sudo[160733]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:47.011Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:52:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:47.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:52:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:47 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:47 compute-0 sudo[160885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqzpddthyvjdkxevhbufwukuvoufzjxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935166.8781073-639-233182296489350/AnsiballZ_stat.py'
Jan 20 18:52:47 compute-0 sudo[160885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:47 compute-0 python3.9[160887]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:47 compute-0 sudo[160885]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:47 compute-0 sudo[160963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymqhuhxeufxpnkezbbnccxmcwohrsrpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935166.8781073-639-233182296489350/AnsiballZ_file.py'
Jan 20 18:52:47 compute-0 sudo[160963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:52:47 compute-0 python3.9[160965]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:47 compute-0 sudo[160963]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:47.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:48 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:48 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:48.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:48 compute-0 sudo[161117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vurerdicnbjqebzrwwseeenuswdvvqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935168.3809376-708-173896568498620/AnsiballZ_file.py'
Jan 20 18:52:48 compute-0 sudo[161117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:49 compute-0 python3.9[161119]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:49 compute-0 sudo[161117]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:49 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36500040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:49 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:52:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:49 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:52:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:52:49 compute-0 ceph-mon[74381]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:49 compute-0 ceph-mon[74381]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:52:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:49 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:52:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:49.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:49] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:52:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:49] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:52:50 compute-0 sudo[161271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrfbifrmzufwsupqmyllqwwcvamoqrlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935170.0333228-732-143362073058900/AnsiballZ_stat.py'
Jan 20 18:52:50 compute-0 sudo[161271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:50 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f36500040f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:50 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:50 compute-0 python3.9[161273]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:50 compute-0 sudo[161271]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:50 compute-0 sudo[161349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isuyysmgzotmsoljtjffvsxufjpncplq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935170.0333228-732-143362073058900/AnsiballZ_file.py'
Jan 20 18:52:50 compute-0 sudo[161349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:50.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:50 compute-0 ceph-mon[74381]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:52:50 compute-0 ceph-mon[74381]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:52:50 compute-0 python3.9[161351]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:50 compute-0 sudo[161349]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:51 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:51 compute-0 sudo[161503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlnhckwogalucljzrxklvlzotsleklxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935171.338773-768-1644913726822/AnsiballZ_stat.py'
Jan 20 18:52:51 compute-0 sudo[161503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:52:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:51.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:52:51 compute-0 python3.9[161505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:51 compute-0 sudo[161503]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:51 compute-0 sudo[161509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:52:51 compute-0 sudo[161509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:52:51 compute-0 sudo[161509]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:52 compute-0 sudo[161606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptyxgnwnubackhzdfidzkxpubhmgjtiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935171.338773-768-1644913726822/AnsiballZ_file.py'
Jan 20 18:52:52 compute-0 sudo[161606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:52 compute-0 python3.9[161608]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:52 compute-0 sudo[161606]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:52 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:52 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004110 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:52 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:52:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:52.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:52 compute-0 sudo[161758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsbekjniwvyomelqofnprlpczvnmoyjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935172.6282268-804-188496426727099/AnsiballZ_systemd.py'
Jan 20 18:52:52 compute-0 sudo[161758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:52 compute-0 ceph-mon[74381]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:53 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:53 compute-0 python3.9[161760]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:52:53 compute-0 systemd[1]: Reloading.
Jan 20 18:52:53 compute-0 systemd-sysv-generator[161791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:52:53 compute-0 systemd-rc-local-generator[161787]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:52:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:53 compute-0 sudo[161758]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:52:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:53.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:52:54 compute-0 sudo[161948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjxsfmqghcyadmytbxsdyyorkkfsscgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935174.173376-828-66502763853397/AnsiballZ_stat.py'
Jan 20 18:52:54 compute-0 sudo[161948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:54 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:54 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:54 compute-0 python3.9[161950]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:54 compute-0 sudo[161948]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:54.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:54 compute-0 sudo[162026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iumkvltjklwkivpilomhyxwvrwcrgytg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935174.173376-828-66502763853397/AnsiballZ_file.py'
Jan 20 18:52:54 compute-0 sudo[162026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:52:54
Jan 20 18:52:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:52:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:52:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.nfs', 'default.rgw.meta', '.mgr', '.rgw.root', 'vms', 'volumes', 'backups', 'cephfs.cephfs.data']
Jan 20 18:52:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:52:54 compute-0 ceph-mon[74381]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:55 compute-0 python3.9[162028]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:52:55 compute-0 sudo[162026]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:52:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:55 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:52:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:55 compute-0 sudo[162180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxzujduxtfxjiaanvvoixfetkcpsjmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935175.4389243-864-158963104593004/AnsiballZ_stat.py'
Jan 20 18:52:55 compute-0 sudo[162180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:52:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:55.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:52:55 compute-0 python3.9[162182]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:55 compute-0 sudo[162180]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:56 compute-0 sudo[162258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yddimyvecaimljpbldatpbvburzmkqtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935175.4389243-864-158963104593004/AnsiballZ_file.py'
Jan 20 18:52:56 compute-0 sudo[162258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:52:56 compute-0 python3.9[162260]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:52:56 compute-0 sudo[162258]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:56 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3640003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:56 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:56.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:57.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:52:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:57.012Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:52:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:52:57.013Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:52:57 compute-0 sudo[162410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnuxmlobcxoayexnhufxyaswpuuvhhpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935176.784947-900-252833571153618/AnsiballZ_systemd.py'
Jan 20 18:52:57 compute-0 sudo[162410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:57 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:57 compute-0 python3.9[162412]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:52:57 compute-0 systemd[1]: Reloading.
Jan 20 18:52:57 compute-0 systemd-sysv-generator[162444]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:52:57 compute-0 systemd-rc-local-generator[162441]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:52:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:52:57 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 18:52:57 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 18:52:57 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 18:52:57 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 18:52:57 compute-0 sudo[162410]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:57 compute-0 ceph-mon[74381]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:52:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:57.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:52:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:52:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:58 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:58 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:58 compute-0 sudo[162608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smexfgbimehbljaevavxdjwqejgnceeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935178.455236-930-102856266782952/AnsiballZ_file.py'
Jan 20 18:52:58 compute-0 sudo[162608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:52:58.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:58 compute-0 ceph-mon[74381]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:52:58 compute-0 python3.9[162610]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:52:58 compute-0 sudo[162608]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:52:59 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f364c000e00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:52:59 compute-0 sudo[162760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecfislgpuhzyufnidfykpcigsaqagrog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935179.1795077-954-177315217628428/AnsiballZ_stat.py'
Jan 20 18:52:59 compute-0 sudo[162760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:52:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185259 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:52:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:52:59 compute-0 python3.9[162762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:52:59 compute-0 sudo[162760]: pam_unix(sudo:session): session closed for user root
Jan 20 18:52:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:52:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:52:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:52:59.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:52:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:59] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 18:52:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:52:59] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 18:52:59 compute-0 sudo[162885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qokoulyaojmlksvjhibjpglcfnxnixwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935179.1795077-954-177315217628428/AnsiballZ_copy.py'
Jan 20 18:53:00 compute-0 sudo[162885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:00 compute-0 python3.9[162887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935179.1795077-954-177315217628428/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:53:00 compute-0 sudo[162885]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:00 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:00 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:00.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:00 compute-0 sudo[163037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koiftrusgdkrdzruzqruwhdmgxhzfpoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935180.683934-1005-163254444288961/AnsiballZ_file.py'
Jan 20 18:53:00 compute-0 sudo[163037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:00 compute-0 ceph-mon[74381]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:53:01 compute-0 python3.9[163039]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:01 compute-0 sudo[163037]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:01 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:53:01 compute-0 sudo[163191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zipjhogwyiaocusyumtnnfstxeqpxdrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935181.4188254-1029-86944931465852/AnsiballZ_file.py'
Jan 20 18:53:01 compute-0 sudo[163191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:53:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:01.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:53:01 compute-0 python3.9[163193]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:53:01 compute-0 sudo[163191]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:02 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f364c001900 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:02 compute-0 sudo[163343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acffzeprgohchqyvdkroepifpnrfsmhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935182.1944957-1053-225257538401669/AnsiballZ_stat.py'
Jan 20 18:53:02 compute-0 sudo[163343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:02 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:02 compute-0 python3.9[163345]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:53:02 compute-0 sudo[163343]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:02.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:02 compute-0 sudo[163466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciyjijlmtxbuuvijygeannrxmwicbnop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935182.1944957-1053-225257538401669/AnsiballZ_copy.py'
Jan 20 18:53:02 compute-0 sudo[163466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:03 compute-0 ceph-mon[74381]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:53:03 compute-0 python3.9[163468]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935182.1944957-1053-225257538401669/.source.json _original_basename=.i1vaxw4k follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:03 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:03 compute-0 sudo[163466]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:03.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:03 compute-0 python3.9[163620]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:04 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:04 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:04.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:05 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:05 compute-0 ceph-mon[74381]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:05.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:06 compute-0 ceph-mon[74381]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:06 compute-0 sudo[164053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mndrmirwnncstpgmiweujhgmylzrvpzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935185.923686-1173-278849329574332/AnsiballZ_container_config_data.py'
Jan 20 18:53:06 compute-0 sudo[164053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:06 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:06 compute-0 podman[164017]: 2026-01-20 18:53:06.425777353 +0000 UTC m=+0.094813052 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 18:53:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:06 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:06 compute-0 python3.9[164059]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 20 18:53:06 compute-0 sudo[164053]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:06.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:07.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:53:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:07 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f364c002220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:07 compute-0 sudo[164218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awgbcvbgbvalikyaleanbjkzyywybwrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935186.9987166-1206-6787234247559/AnsiballZ_container_config_hash.py'
Jan 20 18:53:07 compute-0 sudo[164218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:07 compute-0 python3.9[164220]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 18:53:07 compute-0 sudo[164218]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:07.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:08 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3648003c90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:08 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3654003db0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:08 compute-0 sudo[164372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rolqlifhebcxhcrwvhjibcvnkzmezkbf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935188.0983968-1236-118502356402442/AnsiballZ_edpm_container_manage.py'
Jan 20 18:53:08 compute-0 sudo[164372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:08.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:08 compute-0 python3[164374]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 18:53:08 compute-0 ceph-mon[74381]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:09 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3650004150 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:09] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 18:53:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:09] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 18:53:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:09.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:53:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:10 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f364c002220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:10 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f364c002220 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:10.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:11 compute-0 kernel: ganesha.nfsd[162482]: segfault at 50 ip 00007f36f592332e sp 00007f36617f9210 error 4 in libntirpc.so.5.8[7f36f5908000+2c000] likely on CPU 4 (core 0, socket 4)
Jan 20 18:53:11 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:53:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[150648]: 20/01/2026 18:53:11 : epoch 696fceb4 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f364c002220 fd 39 proxy ignored for local
Jan 20 18:53:11 compute-0 systemd[1]: Started Process Core Dump (PID 164439/UID 0).
Jan 20 18:53:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:11 compute-0 ceph-mon[74381]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:11.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:12 compute-0 sudo[164443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:53:12 compute-0 sudo[164443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:12 compute-0 sudo[164443]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:12.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:13 compute-0 systemd-coredump[164440]: Process 150665 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 61:
                                                    #0  0x00007f36f592332e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:53:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:13 compute-0 systemd[1]: systemd-coredump@5-164439-0.service: Deactivated successfully.
Jan 20 18:53:13 compute-0 systemd[1]: systemd-coredump@5-164439-0.service: Consumed 1.113s CPU time.
Jan 20 18:53:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:13.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:14 compute-0 ceph-mon[74381]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:53:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:14.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:53:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:15.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:16.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:17.014Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:53:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:53:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:17.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:17 compute-0 sudo[164527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:53:17 compute-0 sudo[164527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:17 compute-0 sudo[164527]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:17 compute-0 sudo[164552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:53:17 compute-0 sudo[164552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185318 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:53:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:53:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:18.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:53:19 compute-0 ceph-mon[74381]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:53:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:53:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:19.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:20 compute-0 podman[164489]: 2026-01-20 18:53:20.121252224 +0000 UTC m=+6.509617013 container died 42c66f3f6edd73565f6d3d0eadd59107030548d81ef3c5fa0c83fa495713fc90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:53:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8cb5a89715401cd863bceb946e4ff712d35a14f320d2ee3fd7dd1dbf7b71a0-merged.mount: Deactivated successfully.
Jan 20 18:53:20 compute-0 podman[164489]: 2026-01-20 18:53:20.463312257 +0000 UTC m=+6.851677026 container remove 42c66f3f6edd73565f6d3d0eadd59107030548d81ef3c5fa0c83fa495713fc90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:53:20 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:53:20 compute-0 podman[164387]: 2026-01-20 18:53:20.492053315 +0000 UTC m=+11.581570386 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 18:53:20 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:53:20 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.431s CPU time.
Jan 20 18:53:20 compute-0 podman[164673]: 2026-01-20 18:53:20.64129482 +0000 UTC m=+0.025030101 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 18:53:20 compute-0 sudo[164552]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:20.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:21 compute-0 podman[164673]: 2026-01-20 18:53:21.598456342 +0000 UTC m=+0.982191593 container create 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 18:53:21 compute-0 python3[164374]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 18:53:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:53:21 compute-0 sudo[164372]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:21 compute-0 ceph-mon[74381]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:53:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:21.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:21 compute-0 ceph-mon[74381]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:53:22 compute-0 sudo[164878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nklzxghhlpjxovcfxahvsznzmxdqiwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935202.2024744-1260-164479283945728/AnsiballZ_stat.py'
Jan 20 18:53:22 compute-0 sudo[164878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:22 compute-0 python3.9[164880]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:53:22 compute-0 sudo[164878]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:22.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:23 compute-0 sudo[165033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjsgvakmmfvzaljxyxugqmggskjiyqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935203.0373464-1287-216158237003131/AnsiballZ_file.py'
Jan 20 18:53:23 compute-0 sudo[165033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:23 compute-0 python3.9[165035]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:23 compute-0 sudo[165033]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:23 compute-0 sudo[165111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yghxcahhmdiccdckvrknwxxnwsbbaedl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935203.0373464-1287-216158237003131/AnsiballZ_stat.py'
Jan 20 18:53:23 compute-0 sudo[165111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:23.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:24 compute-0 python3.9[165113]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:53:24 compute-0 sudo[165111]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:24 compute-0 sudo[165262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wruenqfpswofpcoxuyrcnhnwkypgptti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935204.3338344-1287-117407716889406/AnsiballZ_copy.py'
Jan 20 18:53:24 compute-0 sudo[165262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:24.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:24 compute-0 python3.9[165264]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935204.3338344-1287-117407716889406/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:24 compute-0 sudo[165262]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:53:25 compute-0 sudo[165338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekzfinpihaebdqvzdttdjkwzpfayxceo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935204.3338344-1287-117407716889406/AnsiballZ_systemd.py'
Jan 20 18:53:25 compute-0 sudo[165338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:25 compute-0 python3.9[165340]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 18:53:25 compute-0 systemd[1]: Reloading.
Jan 20 18:53:25 compute-0 systemd-rc-local-generator[165368]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:53:25 compute-0 systemd-sysv-generator[165371]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:53:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:25.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:26 compute-0 ceph-mon[74381]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:53:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:53:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:53:26 compute-0 sudo[165338]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:26 compute-0 sudo[165387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:53:26 compute-0 sudo[165387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:26 compute-0 sudo[165387]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:26 compute-0 sudo[165426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:53:26 compute-0 sudo[165426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:26 compute-0 sudo[165501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yltrklufrbxndlvcclemvawuuyciadyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935204.3338344-1287-117407716889406/AnsiballZ_systemd.py'
Jan 20 18:53:26 compute-0 sudo[165501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:26 compute-0 python3.9[165503]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:53:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:26.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:53:26 compute-0 systemd[1]: Reloading.
Jan 20 18:53:26 compute-0 podman[165545]: 2026-01-20 18:53:26.767631822 +0000 UTC m=+0.023464413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:26 compute-0 systemd-sysv-generator[165587]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:53:26 compute-0 systemd-rc-local-generator[165584]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:53:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:27.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:53:27 compute-0 podman[165545]: 2026-01-20 18:53:27.065630329 +0000 UTC m=+0.321462900 container create 3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:53:27 compute-0 ceph-mon[74381]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:27 compute-0 ceph-mon[74381]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:53:27 compute-0 ceph-mon[74381]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:27 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:27 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:53:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:53:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:53:27 compute-0 systemd[1]: Started libpod-conmon-3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93.scope.
Jan 20 18:53:27 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 20 18:53:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:27 compute-0 podman[165545]: 2026-01-20 18:53:27.357679804 +0000 UTC m=+0.613512375 container init 3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:53:27 compute-0 podman[165545]: 2026-01-20 18:53:27.365398499 +0000 UTC m=+0.621231070 container start 3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_meninsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:53:27 compute-0 podman[165545]: 2026-01-20 18:53:27.369329923 +0000 UTC m=+0.625162494 container attach 3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_meninsky, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:53:27 compute-0 systemd[1]: libpod-3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93.scope: Deactivated successfully.
Jan 20 18:53:27 compute-0 funny_meninsky[165597]: 167 167
Jan 20 18:53:27 compute-0 conmon[165597]: conmon 3a79744226e340b863fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93.scope/container/memory.events
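[annotation] conmon could not read the container's cgroup v2 memory.events because the scope was already being torn down; the container exited almost immediately (see the "died" event two lines later). While a scope is alive, that file is plain "key value" text. A minimal reader, assuming cgroup v2 and the path from the warning:

    # Sketch: parse a cgroup v2 memory.events file like the one conmon failed to open.
    # The path is the one from the warning and only exists while the scope is live.
    from pathlib import Path

    path = Path("/sys/fs/cgroup/machine.slice/"
                "libpod-3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93.scope/"
                "container/memory.events")
    try:
        events = dict(line.split() for line in path.read_text().splitlines())
        print("oom_kill count:", events.get("oom_kill", "0"))
    except FileNotFoundError:
        # Exactly the situation logged above: the container exited before the read.
        print("cgroup scope already gone")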
Jan 20 18:53:27 compute-0 podman[165545]: 2026-01-20 18:53:27.381879364 +0000 UTC m=+0.637711935 container died 3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_meninsky, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 18:53:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f65b726eefc016c3a936fdb588ca695393541b59255182c12a742804ac67d57-merged.mount: Deactivated successfully.
Jan 20 18:53:27 compute-0 podman[165545]: 2026-01-20 18:53:27.787158898 +0000 UTC m=+1.042991469 container remove 3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:53:27 compute-0 systemd[1]: libpod-conmon-3a79744226e340b863fadb34639803e32084373fc3adecee8c0f512680f8ea93.scope: Deactivated successfully.
Jan 20 18:53:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:27.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
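[annotation] The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100/.102 that recur through this log are load-balancer-style liveness probes against the RGW beast frontend; they return 200 with no auth. The probe is easy to reproduce; the port below is an assumption, since the access log records only the client side:

    # Sketch: issue the same anonymous HEAD / probe the beast access log records.
    # Host and port are assumptions; substitute the RGW frontend actually configured.
    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    resp = conn.getresponse()
    print(resp.status)   # 200 expected, matching http_status=200 in the log
    conn.close()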
Jan 20 18:53:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78ef1c45086852fafcac00b1f8976d2c62ca5189d2e4ab12edbf9ae6bd9b03a/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a78ef1c45086852fafcac00b1f8976d2c62ca5189d2e4ab12edbf9ae6bd9b03a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1.
Jan 20 18:53:28 compute-0 podman[165601]: 2026-01-20 18:53:28.123284819 +0000 UTC m=+0.934358838 container init 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
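[annotation] The config_data blob in the init event above is the edpm_ansible container definition, and its keys map almost one-to-one onto `podman run` flags. A rough translator for just the keys visible in this log (the key-to-flag mapping is an assumption, not taken from edpm_ansible itself):

    # Sketch: turn a config_data dict like the one logged above into a podman argv.
    def podman_run_args(name, cfg):
        args = ["podman", "run", "--name", name, "--detach"]
        if cfg.get("privileged"):
            args.append("--privileged")
        for key, flag in (("net", "--net"), ("pid", "--pid"),
                          ("user", "--user"), ("cgroupns", "--cgroupns"),
                          ("restart", "--restart")):
            if key in cfg:
                args += [flag, cfg[key]]
        for k, v in cfg.get("environment", {}).items():
            args += ["--env", f"{k}={v}"]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]          # e.g. '/run/netns:/run/netns:shared'
        args.append(cfg["image"])
        return args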
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + sudo -E kolla_set_configs
Jan 20 18:53:28 compute-0 podman[165601]: 2026-01-20 18:53:28.147189631 +0000 UTC m=+0.958263640 container start 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 18:53:28 compute-0 edpm-start-podman-container[165601]: ovn_metadata_agent
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Validating config file
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Copying service configuration files
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Writing out command to execute
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: ++ cat /run_command
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + CMD=neutron-ovn-metadata-agent
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + ARGS=
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + sudo kolla_copy_cacerts
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: Running command: 'neutron-ovn-metadata-agent'
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + [[ ! -n '' ]]
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + . kolla_extend_start
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + umask 0022
Jan 20 18:53:28 compute-0 ovn_metadata_agent[165637]: + exec neutron-ovn-metadata-agent
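[annotation] The xtrace above is the kolla container entrypoint: kolla_set_configs copies files per /var/lib/kolla/config_files/config.json, writes the command to /run_command, and the shell then `cat`s it and `exec`s it so the agent replaces the shell as the container's main process. A condensed re-implementation of the copy step, assuming the usual kolla config.json layout ({"command": ..., "config_files": [{source, dest, owner, perm}]}):

    # Sketch: what kolla_set_configs does per entry, under the assumed layout above.
    import json, shutil, subprocess

    cfg = json.load(open("/var/lib/kolla/config_files/config.json"))
    for f in cfg.get("config_files", []):
        shutil.copy(f["source"], f["dest"])                       # "Copying ... to ..."
        subprocess.run(["chown", f.get("owner", "root"), f["dest"]], check=True)
        subprocess.run(["chmod", f.get("perm", "0600"), f["dest"]], check=True)  # "Setting permission"
    with open("/run_command", "w") as fh:                         # "Writing out command to execute"
        fh.write(cfg["command"])                                  # later: CMD=$(cat /run_command); exec $CMD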
Jan 20 18:53:28 compute-0 podman[165645]: 2026-01-20 18:53:28.236854569 +0000 UTC m=+0.300058787 container create 3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:53:28 compute-0 podman[165645]: 2026-01-20 18:53:28.145928762 +0000 UTC m=+0.209132990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:28 compute-0 systemd[1]: Started libpod-conmon-3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f.scope.
Jan 20 18:53:28 compute-0 podman[165661]: 2026-01-20 18:53:28.358727189 +0000 UTC m=+0.197247406 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
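[annotation] The health_status=healthy event comes from the transient `podman healthcheck run` unit started at 18:53:27; the test is the /openstack/healthcheck script named in config_data, and podman records the result on the container. The recorded state can be read back via inspect:

    # Sketch: read back the health state podman recorded for the container above.
    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}", "ovn_metadata_agent"],
        check=True, capture_output=True, text=True).stdout
    health = json.loads(out)
    # Matches health_status=healthy, health_failing_streak=0 in the log line above.
    print(health["Status"], "failing streak:", health["FailingStreak"])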
Jan 20 18:53:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac53399f807f5987881e1da4c8db521369508d26dd7fbe7df9fbf3f6bf95cd14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac53399f807f5987881e1da4c8db521369508d26dd7fbe7df9fbf3f6bf95cd14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac53399f807f5987881e1da4c8db521369508d26dd7fbe7df9fbf3f6bf95cd14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac53399f807f5987881e1da4c8db521369508d26dd7fbe7df9fbf3f6bf95cd14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac53399f807f5987881e1da4c8db521369508d26dd7fbe7df9fbf3f6bf95cd14/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
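[annotation] The recurring "supports timestamps until 2038 (0x7fffffff)" lines are the kernel noting that these XFS-backed overlay mounts still use 32-bit signed inode timestamps; 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit. The arithmetic:

    # 0x7fffffff seconds past the epoch is the limit the kernel logs above.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00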
Jan 20 18:53:28 compute-0 podman[165645]: 2026-01-20 18:53:28.418873988 +0000 UTC m=+0.482078236 container init 3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_archimedes, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 18:53:28 compute-0 podman[165645]: 2026-01-20 18:53:28.426835899 +0000 UTC m=+0.490040127 container start 3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_archimedes, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:53:28 compute-0 podman[165645]: 2026-01-20 18:53:28.476278543 +0000 UTC m=+0.539482781 container attach 3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:53:28 compute-0 edpm-start-podman-container[165599]: Creating additional drop-in dependency for "ovn_metadata_agent" (7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1)
Jan 20 18:53:28 compute-0 systemd[1]: Reloading.
Jan 20 18:53:28 compute-0 systemd-rc-local-generator[165733]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:53:28 compute-0 systemd-sysv-generator[165738]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:53:28 compute-0 adoring_archimedes[165699]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:53:28 compute-0 adoring_archimedes[165699]: --> All data devices are unavailable
Jan 20 18:53:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:28.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:28 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 20 18:53:28 compute-0 systemd[1]: libpod-3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f.scope: Deactivated successfully.
Jan 20 18:53:28 compute-0 sudo[165501]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:28 compute-0 podman[165757]: 2026-01-20 18:53:28.87322628 +0000 UTC m=+0.033959794 container died 3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_archimedes, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac53399f807f5987881e1da4c8db521369508d26dd7fbe7df9fbf3f6bf95cd14-merged.mount: Deactivated successfully.
Jan 20 18:53:29 compute-0 podman[165757]: 2026-01-20 18:53:29.022214058 +0000 UTC m=+0.182947572 container remove 3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 18:53:29 compute-0 systemd[1]: libpod-conmon-3c3ac0f12123f9d9bacc26a173e2b41c05d898bc1760a9887954e731d3d3559f.scope: Deactivated successfully.
Jan 20 18:53:29 compute-0 sudo[165426]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:29 compute-0 sudo[165797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:53:29 compute-0 sudo[165797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:29 compute-0 sudo[165797]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:29 compute-0 ceph-mon[74381]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:53:29 compute-0 sudo[165846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:53:29 compute-0 sudo[165846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
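[annotation] The cephadm call above runs `ceph-volume lvm list --format json` inside the ceph container; its JSON maps OSD ids to their LVM devices. The earlier "passed data devices: 0 physical, 1 LVM / All data devices are unavailable" output came from the same tooling probing for deployable disks. A sketch of parsing that JSON (run as root where ceph-volume is available):

    # Sketch: run the same query cephadm issues above and summarize it.
    import json, subprocess

    out = subprocess.run(["ceph-volume", "lvm", "list", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for osd_id, devices in json.loads(out).items():
        for dev in devices:
            print(f"osd.{osd_id}: {dev.get('lv_path')} ({dev.get('type')})")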
Jan 20 18:53:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.553532284 +0000 UTC m=+0.038509604 container create 505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:53:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:29 compute-0 systemd[1]: Started libpod-conmon-505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a.scope.
Jan 20 18:53:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.632697269 +0000 UTC m=+0.117674629 container init 505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.536180047 +0000 UTC m=+0.021157387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.639329818 +0000 UTC m=+0.124307148 container start 505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:53:29 compute-0 sweet_bardeen[166030]: 167 167
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.643838276 +0000 UTC m=+0.128815616 container attach 505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:53:29 compute-0 systemd[1]: libpod-505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a.scope: Deactivated successfully.
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.645038585 +0000 UTC m=+0.130015905 container died 505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-a54ed7022ea239b98914b418f49fcf6add9e5f84f9d091489c9806f240298ac9-merged.mount: Deactivated successfully.
Jan 20 18:53:29 compute-0 podman[166002]: 2026-01-20 18:53:29.690209946 +0000 UTC m=+0.175187266 container remove 505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 18:53:29 compute-0 systemd[1]: libpod-conmon-505ad2fb6686ea169c6a5b39405ee8c20dfc3bde1f8e7012c3c39196d182c52a.scope: Deactivated successfully.
Jan 20 18:53:29 compute-0 python3.9[166018]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
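[annotation] ansible.builtin.slurp here is the deploy playbook reading /var/lib/edpm-config/deployed_services.yaml back from the node; slurp returns the file base64-encoded, so the controller decodes it before use. A sketch of that round trip (return shape per the slurp module: content, encoding, source; the YAML payload below is illustrative):

    # Sketch: what the controller does with slurp's return value.
    import base64

    result = {"content": base64.b64encode(b"edpm_services: []\n").decode(),
              "encoding": "base64",
              "source": "/var/lib/edpm-config/deployed_services.yaml"}
    text = base64.b64decode(result["content"]).decode()
    print(text)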
Jan 20 18:53:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:29] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:53:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:29] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
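[annotation] The paired cherrypy access lines show Prometheus 2.51.0 scraping the ceph-mgr prometheus module (48339 bytes of metrics). The same endpoint can be pulled by hand; port 9283 is the module's default and an assumption here, since the log records only the "/metrics" path:

    # Sketch: fetch the ceph-mgr prometheus exposition the scrape above hits.
    from urllib.request import urlopen

    with urlopen("http://192.168.122.100:9283/metrics", timeout=5) as resp:
        body = resp.read()
    print(resp.status, len(body), "bytes")  # compare with "200 48339" in the log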
Jan 20 18:53:29 compute-0 podman[166078]: 2026-01-20 18:53:29.849613734 +0000 UTC m=+0.043674137 container create a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:53:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:29.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:29 compute-0 systemd[1]: Started libpod-conmon-a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e.scope.
Jan 20 18:53:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70a84c67d46c359451ad89acbf82e0cdb2eb176b105480318d7228e89c7e2f66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70a84c67d46c359451ad89acbf82e0cdb2eb176b105480318d7228e89c7e2f66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70a84c67d46c359451ad89acbf82e0cdb2eb176b105480318d7228e89c7e2f66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70a84c67d46c359451ad89acbf82e0cdb2eb176b105480318d7228e89c7e2f66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:29 compute-0 podman[166078]: 2026-01-20 18:53:29.829961374 +0000 UTC m=+0.024021797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:29 compute-0 podman[166078]: 2026-01-20 18:53:29.935826729 +0000 UTC m=+0.129887152 container init a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_liskov, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:53:29 compute-0 podman[166078]: 2026-01-20 18:53:29.942330006 +0000 UTC m=+0.136390409 container start a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_liskov, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:53:29 compute-0 podman[166078]: 2026-01-20 18:53:29.94671505 +0000 UTC m=+0.140775473 container attach a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_liskov, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.138 165659 INFO neutron.common.config [-] Logging enabled!
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.138 165659 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.138 165659 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.139 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.139 165659 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.139 165659 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.139 165659 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.139 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.140 165659 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.140 165659 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.140 165659 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.140 165659 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.140 165659 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.140 165659 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.141 165659 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.141 165659 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.141 165659 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.141 165659 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.141 165659 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.141 165659 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.142 165659 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.143 165659 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.143 165659 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.143 165659 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.143 165659 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.143 165659 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.143 165659 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.144 165659 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.144 165659 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.144 165659 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.144 165659 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.144 165659 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.144 165659 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.145 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.146 165659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.147 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.148 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.148 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.148 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.148 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.148 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.148 165659 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.149 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.150 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.150 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.150 165659 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.150 165659 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.150 165659 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.150 165659 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.151 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.152 165659 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.152 165659 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.152 165659 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.152 165659 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.152 165659 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.152 165659 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.153 165659 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.154 165659 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.154 165659 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.154 165659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.154 165659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.154 165659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.154 165659 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.155 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.156 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.157 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.157 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.157 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.157 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.157 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.157 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.158 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.159 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.159 165659 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.159 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.159 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.159 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.159 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.160 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.161 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.162 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.163 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.164 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.165 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.165 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.165 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.165 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.165 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.165 165659 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.166 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.167 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.168 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.169 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.169 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.169 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.169 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.169 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.169 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.170 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.171 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.172 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.173 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.174 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.175 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.176 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.177 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.178 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.178 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.178 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.178 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.178 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.178 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.179 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.180 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.181 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.181 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.181 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.181 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.181 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.181 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.182 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.182 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.182 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.182 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.182 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.182 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.183 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.183 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.183 165659 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.183 165659 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.192 165659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.192 165659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.192 165659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.192 165659 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.193 165659 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.209 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 7018ca8a-de0e-4b56-bb43-675238d4f8b3 (UUID: 7018ca8a-de0e-4b56-bb43-675238d4f8b3) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.231 165659 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.231 165659 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.231 165659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.231 165659 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.234 165659 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 18:53:30 compute-0 competent_liskov[166095]: {
Jan 20 18:53:30 compute-0 competent_liskov[166095]:     "0": [
Jan 20 18:53:30 compute-0 competent_liskov[166095]:         {
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "devices": [
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "/dev/loop3"
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             ],
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "lv_name": "ceph_lv0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "lv_size": "21470642176",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "name": "ceph_lv0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "tags": {
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.cluster_name": "ceph",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.crush_device_class": "",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.encrypted": "0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.osd_id": "0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.type": "block",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.vdo": "0",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:                 "ceph.with_tpm": "0"
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             },
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "type": "block",
Jan 20 18:53:30 compute-0 competent_liskov[166095]:             "vg_name": "ceph_vg0"
Jan 20 18:53:30 compute-0 competent_liskov[166095]:         }
Jan 20 18:53:30 compute-0 competent_liskov[166095]:     ]
Jan 20 18:53:30 compute-0 competent_liskov[166095]: }
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.240 165659 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.247 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '7018ca8a-de0e-4b56-bb43-675238d4f8b3'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], external_ids={}, name=7018ca8a-de0e-4b56-bb43-675238d4f8b3, nb_cfg_timestamp=1768935134324, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.248 165659 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f4e780e5f70>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.249 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.249 165659 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.249 165659 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.250 165659 INFO oslo_service.service [-] Starting 1 workers
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.254 165659 DEBUG oslo_service.service [-] Started child 166104 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.257 166104 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-230367'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.258 165659 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpy3ryxvwe/privsep.sock']
Jan 20 18:53:30 compute-0 systemd[1]: libpod-a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e.scope: Deactivated successfully.
Jan 20 18:53:30 compute-0 podman[166078]: 2026-01-20 18:53:30.26862118 +0000 UTC m=+0.462681583 container died a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.279 166104 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.280 166104 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.280 166104 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.283 166104 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.290 166104 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 18:53:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-70a84c67d46c359451ad89acbf82e0cdb2eb176b105480318d7228e89c7e2f66-merged.mount: Deactivated successfully.
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.301 166104 INFO eventlet.wsgi.server [-] (166104) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 20 18:53:30 compute-0 podman[166078]: 2026-01-20 18:53:30.310116873 +0000 UTC m=+0.504177276 container remove a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_liskov, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:53:30 compute-0 systemd[1]: libpod-conmon-a8227ca42979b31f80fac7def61d966152f4fbd0b642c2ce4a602f4e8160de1e.scope: Deactivated successfully.
Jan 20 18:53:30 compute-0 sudo[165846]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:30 compute-0 sudo[166120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:53:30 compute-0 sudo[166120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:30 compute-0 sudo[166120]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:30 compute-0 sudo[166168]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:53:30 compute-0 sudo[166168]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:30 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 6.
Jan 20 18:53:30 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:53:30 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.431s CPU time.
Jan 20 18:53:30 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:53:30 compute-0 sudo[166303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jupmqfntxxruloupauqxskijlkqkepvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935210.4222047-1422-227534884351132/AnsiballZ_stat.py'
Jan 20 18:53:30 compute-0 sudo[166303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:53:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:30.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:53:30 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 20 18:53:30 compute-0 python3.9[166319]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:53:30 compute-0 podman[166373]: 2026-01-20 18:53:30.881020126 +0000 UTC m=+0.045790518 container create 33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_brattain, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 18:53:30 compute-0 podman[166395]: 2026-01-20 18:53:30.909536149 +0000 UTC m=+0.039308612 container create c259dd630f10443cf5ccfdd0ba6adcadf440c2397248508986c7e56a6da642a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 18:53:30 compute-0 systemd[1]: Started libpod-conmon-33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad.scope.
Jan 20 18:53:30 compute-0 sudo[166303]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:30 compute-0 podman[166373]: 2026-01-20 18:53:30.856330245 +0000 UTC m=+0.021100637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.960 165659 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.960 165659 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy3ryxvwe/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 18:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180747e8966a323b9215d3589ce815a7c53ded249cc9ca87bb0bc999faf3899/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180747e8966a323b9215d3589ce815a7c53ded249cc9ca87bb0bc999faf3899/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180747e8966a323b9215d3589ce815a7c53ded249cc9ca87bb0bc999faf3899/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:30 compute-0 podman[166373]: 2026-01-20 18:53:30.963502052 +0000 UTC m=+0.128272454 container init 33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180747e8966a323b9215d3589ce815a7c53ded249cc9ca87bb0bc999faf3899/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.821 166372 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.825 166372 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.827 166372 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.827 166372 INFO oslo.privsep.daemon [-] privsep daemon running as pid 166372
Jan 20 18:53:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:30.962 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[21c6c1f2-b6b7-4f11-a420-19cec11c97bf]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 18:53:30 compute-0 podman[166373]: 2026-01-20 18:53:30.972404935 +0000 UTC m=+0.137175297 container start 33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_brattain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 18:53:30 compute-0 podman[166373]: 2026-01-20 18:53:30.976016532 +0000 UTC m=+0.140786904 container attach 33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:53:30 compute-0 serene_brattain[166411]: 167 167
Jan 20 18:53:30 compute-0 systemd[1]: libpod-33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad.scope: Deactivated successfully.
Jan 20 18:53:30 compute-0 podman[166373]: 2026-01-20 18:53:30.981259828 +0000 UTC m=+0.146030200 container died 33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_brattain, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 18:53:30 compute-0 podman[166395]: 2026-01-20 18:53:30.891460806 +0000 UTC m=+0.021233289 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:31 compute-0 podman[166395]: 2026-01-20 18:53:31.000152229 +0000 UTC m=+0.129924702 container init c259dd630f10443cf5ccfdd0ba6adcadf440c2397248508986c7e56a6da642a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:53:31 compute-0 podman[166395]: 2026-01-20 18:53:31.006127732 +0000 UTC m=+0.135900195 container start c259dd630f10443cf5ccfdd0ba6adcadf440c2397248508986c7e56a6da642a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:53:31 compute-0 bash[166395]: c259dd630f10443cf5ccfdd0ba6adcadf440c2397248508986c7e56a6da642a4
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:53:31 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-75c3bce0be2610f68144f632d0aa88f74d5a8a09c5eadf407d7a021ab7847cae-merged.mount: Deactivated successfully.
Jan 20 18:53:31 compute-0 podman[166373]: 2026-01-20 18:53:31.040836364 +0000 UTC m=+0.205606736 container remove 33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_brattain, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:53:31 compute-0 systemd[1]: libpod-conmon-33d32cc3dbfb9ac721ce9ad086c583f12f8c0a7b288c8dfb5b8c8217495915ad.scope: Deactivated successfully.
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:53:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:31 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:53:31 compute-0 podman[166554]: 2026-01-20 18:53:31.197543367 +0000 UTC m=+0.043578905 container create bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elgamal, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 20 18:53:31 compute-0 systemd[1]: Started libpod-conmon-bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce.scope.
Jan 20 18:53:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:53:31 compute-0 sudo[166622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezjkhuqevbhkiqdpqfyqujcptbtmagm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935210.4222047-1422-227534884351132/AnsiballZ_copy.py'
Jan 20 18:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a814aedd7e7bc0de341acc5fe363e9f1e3470e7eeac87ce39ca7c45fdc70d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:31 compute-0 podman[166554]: 2026-01-20 18:53:31.178578513 +0000 UTC m=+0.024614071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:53:31 compute-0 sudo[166622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a814aedd7e7bc0de341acc5fe363e9f1e3470e7eeac87ce39ca7c45fdc70d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a814aedd7e7bc0de341acc5fe363e9f1e3470e7eeac87ce39ca7c45fdc70d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43a814aedd7e7bc0de341acc5fe363e9f1e3470e7eeac87ce39ca7c45fdc70d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:53:31 compute-0 podman[166554]: 2026-01-20 18:53:31.318209967 +0000 UTC m=+0.164245535 container init bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:53:31 compute-0 podman[166554]: 2026-01-20 18:53:31.328517894 +0000 UTC m=+0.174553432 container start bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:53:31 compute-0 podman[166554]: 2026-01-20 18:53:31.333626866 +0000 UTC m=+0.179662404 container attach bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elgamal, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:53:31 compute-0 python3.9[166625]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935210.4222047-1422-227534884351132/.source.yaml _original_basename=.ib7dp1k_ follow=False checksum=f29ab5f51775497d51c7a8376ea74b170ca0ed5e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:31 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:31.497 166372 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:53:31 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:31.498 166372 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:53:31 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:31.498 166372 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:53:31 compute-0 sudo[166622]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:53:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:31.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:53:31 compute-0 lvm[166723]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:53:31 compute-0 lvm[166723]: VG ceph_vg0 finished
Jan 20 18:53:32 compute-0 ceph-mon[74381]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:32 compute-0 friendly_elgamal[166619]: {}
Jan 20 18:53:32 compute-0 sudo[166726]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.094 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[dde5ffa9-a5d7-4a72-8a79-a45734515395]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.096 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, column=external_ids, values=({'neutron:ovn-metadata-id': '9988e94e-fc04-590e-a961-c6ac0c2d4a19'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 18:53:32 compute-0 sudo[166726]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:32 compute-0 sudo[166726]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:32 compute-0 systemd[1]: libpod-bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce.scope: Deactivated successfully.
Jan 20 18:53:32 compute-0 podman[166554]: 2026-01-20 18:53:32.122604452 +0000 UTC m=+0.968639990 container died bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 18:53:32 compute-0 systemd[1]: libpod-bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce.scope: Consumed 1.045s CPU time.
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.133 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.142 165659 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.144 165659 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.144 165659 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.144 165659 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.144 165659 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.144 165659 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.145 165659 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.146 165659 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.147 165659 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.147 165659 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.147 165659 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.147 165659 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.147 165659 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.147 165659 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.148 165659 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.149 165659 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-43a814aedd7e7bc0de341acc5fe363e9f1e3470e7eeac87ce39ca7c45fdc70d8-merged.mount: Deactivated successfully.
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.150 165659 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.151 165659 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.153 165659 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.153 165659 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.153 165659 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.153 165659 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.153 165659 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.154 165659 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.155 165659 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.156 165659 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.157 165659 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.158 165659 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.159 165659 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.160 165659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.161 165659 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.162 165659 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.163 165659 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.164 165659 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.165 165659 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.166 165659 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.167 165659 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.168 165659 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.169 165659 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.170 165659 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.171 165659 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.172 165659 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.173 165659 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.174 165659 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.175 165659 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.176 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.177 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 podman[166554]: 2026-01-20 18:53:32.177300732 +0000 UTC m=+1.023336270 container remove bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elgamal, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.178 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.179 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 18:53:32 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:53:32.180 165659 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 18:53:32 compute-0 systemd[1]: libpod-conmon-bcad9fe261c6f4e1ccfba7aef6d6f0f8e262eeef589c068109c9a797c95980ce.scope: Deactivated successfully.
Jan 20 18:53:32 compute-0 sudo[166168]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:32 compute-0 sshd-session[156885]: Connection closed by 192.168.122.30 port 45390
Jan 20 18:53:32 compute-0 sshd-session[156882]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:53:32 compute-0 systemd[1]: session-53.scope: Deactivated successfully.
Jan 20 18:53:32 compute-0 systemd[1]: session-53.scope: Consumed 53.507s CPU time.
Jan 20 18:53:32 compute-0 systemd-logind[796]: Session 53 logged out. Waiting for processes to exit.
Jan 20 18:53:32 compute-0 systemd-logind[796]: Removed session 53.
Jan 20 18:53:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:53:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:32.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:53:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:53:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:33.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:53:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:53:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:34 compute-0 ceph-mon[74381]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:53:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:53:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:34 compute-0 sudo[166763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:53:34 compute-0 sudo[166763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:34 compute-0 sudo[166763]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:34.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:35 compute-0 ceph-mon[74381]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:35 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:35 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:53:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:35.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:36 compute-0 ceph-mon[74381]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:53:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:36.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:37.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:53:37 compute-0 podman[166790]: 2026-01-20 18:53:37.128842738 +0000 UTC m=+0.106260476 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 18:53:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:37 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:53:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:37 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:53:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:37 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:53:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:37 compute-0 sshd-session[166820]: Accepted publickey for zuul from 192.168.122.30 port 45322 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:53:37 compute-0 systemd-logind[796]: New session 54 of user zuul.
Jan 20 18:53:37 compute-0 systemd[1]: Started Session 54 of User zuul.
Jan 20 18:53:37 compute-0 sshd-session[166820]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:53:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:37.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:38.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:38 compute-0 python3.9[166973]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:53:39 compute-0 ceph-mon[74381]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:39] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:53:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:39] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:53:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:39.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:39 compute-0 sudo[167129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzalqciioxhfnpoogkslrlcumysbktwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935219.475258-57-45688500731388/AnsiballZ_command.py'
Jan 20 18:53:39 compute-0 sudo[167129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:40 compute-0 python3.9[167131]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:53:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185340 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:53:40 compute-0 sudo[167129]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:40.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:53:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:41 compute-0 ceph-mon[74381]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:41.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:41 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:53:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:41 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:53:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:41 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:53:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:42 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:53:42 compute-0 sudo[167296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhnzfqeszaxlxqamvcoxgkwyssawlpty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935221.4048617-90-71529641577137/AnsiballZ_systemd_service.py'
Jan 20 18:53:42 compute-0 sudo[167296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:42 compute-0 python3.9[167298]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 18:53:42 compute-0 systemd[1]: Reloading.
Jan 20 18:53:42 compute-0 systemd-rc-local-generator[167326]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:53:42 compute-0 systemd-sysv-generator[167329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:53:42 compute-0 sudo[167296]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:42.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:43 compute-0 ceph-mon[74381]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:43 compute-0 python3.9[167484]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:53:43 compute-0 network[167501]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:53:43 compute-0 network[167502]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:53:43 compute-0 network[167503]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:53:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:53:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:43.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:53:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:44 compute-0 ceph-mon[74381]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:53:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:44.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:53:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:45.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:46 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:53:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:46 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:53:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:46 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:53:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:46.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:47.021Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:53:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:53:47 compute-0 ceph-mon[74381]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:53:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:47.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:53:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:48.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:53:48 compute-0 ceph-mon[74381]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:53:49 compute-0 sudo[167769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbeihmzlvfoltpcjnkhtozzhvanqojfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935228.9234521-147-230634161941131/AnsiballZ_systemd_service.py'
Jan 20 18:53:49 compute-0 sudo[167769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:49 compute-0 python3.9[167771]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:49 compute-0 sudo[167769]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Jan 20 18:53:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:49] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 18:53:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:49] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 18:53:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:49.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:49 compute-0 sudo[167924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owkcirccpjkdrdlagicwbapzskpozqbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935229.6807299-147-209863423324346/AnsiballZ_systemd_service.py'
Jan 20 18:53:49 compute-0 sudo[167924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:50 compute-0 python3.9[167926]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:50 compute-0 sudo[167924]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:50.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:50 compute-0 sudo[168077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wonkknthckqooaojksbuuhjoprmqnxtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935230.43538-147-205027130532155/AnsiballZ_systemd_service.py'
Jan 20 18:53:50 compute-0 sudo[168077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:51 compute-0 python3.9[168079]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:51 compute-0 sudo[168077]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:51 compute-0 ceph-mon[74381]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Jan 20 18:53:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Jan 20 18:53:51 compute-0 sudo[168231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvvtwgmvxccvsblezarnymekykxshreo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935231.3165734-147-202634547584168/AnsiballZ_systemd_service.py'
Jan 20 18:53:51 compute-0 sudo[168231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:51 compute-0 python3.9[168234]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:51.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:51 compute-0 sudo[168231]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:52 compute-0 sudo[168312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:53:52 compute-0 sudo[168312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:53:52 compute-0 sudo[168312]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:53:52 compute-0 sudo[168424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-noogzjzmwstgvypljepferlopbxefoek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935232.0768359-147-105992627944381/AnsiballZ_systemd_service.py'
Jan 20 18:53:52 compute-0 sudo[168424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:52 compute-0 ceph-mon[74381]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 341 B/s wr, 2 op/s
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:52 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:52 compute-0 python3.9[168426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:52 compute-0 sudo[168424]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:52.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:53 compute-0 sudo[168577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgmqsugvhvjbgmlbbruehlizrdtxnrfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935232.847942-147-233537914399867/AnsiballZ_systemd_service.py'
Jan 20 18:53:53 compute-0 sudo[168577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:53 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:53 compute-0 python3.9[168579]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:53 compute-0 sudo[168577]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:53:53 compute-0 sudo[168732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdxmdbdrxgteglovnlqxiylbymoqxcjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935233.5472906-147-194798242860636/AnsiballZ_systemd_service.py'
Jan 20 18:53:53 compute-0 sudo[168732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:53.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:54 compute-0 python3.9[168734]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:53:54 compute-0 sudo[168732]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185354 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:53:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:54 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:54 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8500000d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:54 compute-0 ceph-mon[74381]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:53:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:53:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:54.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:53:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:53:54
Jan 20 18:53:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:53:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:53:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['images', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.nfs']
Jan 20 18:53:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:53:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:55 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:55 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:53:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:55 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:53:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:53:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:55 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:53:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:53:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:55.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:56 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8514001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:56 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:56 compute-0 sudo[168887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpbamaetgchxhigvnlrjfkjhriszftpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935236.345439-303-48698994751916/AnsiballZ_file.py'
Jan 20 18:53:56 compute-0 sudo[168887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:56 compute-0 ceph-mon[74381]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:53:56 compute-0 python3.9[168889]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:56 compute-0 sudo[168887]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:57.022Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:53:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:53:57.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:53:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:57 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8500001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:57 compute-0 sudo[169039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfdscwvxoyfpsfontbztfeemfivbotk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935237.1438897-303-82025779506224/AnsiballZ_file.py'
Jan 20 18:53:57 compute-0 sudo[169039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:57 compute-0 python3.9[169041]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 20 18:53:57 compute-0 sudo[169039]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:57.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:58 compute-0 sudo[169193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyxlurmdizkicizerhnaiqdiysvpdats ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935237.7304187-303-143209862486584/AnsiballZ_file.py'
Jan 20 18:53:58 compute-0 sudo[169193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185358 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:53:58 compute-0 python3.9[169195]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:58 compute-0 sudo[169193]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:58 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8514001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:58 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100027e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:58 compute-0 sudo[169358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utnhtbzmcsgtisubuvwdkepdavdfewjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935238.308448-303-72446056659319/AnsiballZ_file.py'
Jan 20 18:53:58 compute-0 sudo[169358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:58 compute-0 podman[169319]: 2026-01-20 18:53:58.611620682 +0000 UTC m=+0.053545934 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 18:53:58 compute-0 python3.9[169364]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:58 compute-0 sudo[169358]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:53:58.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:53:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:53:59 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c001f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:53:59 compute-0 sudo[169516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcfqtcuvzwdajliwftcxpmdxyvyeksxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935239.005854-303-277754132641103/AnsiballZ_file.py'
Jan 20 18:53:59 compute-0 sudo[169516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:53:59 compute-0 ceph-mon[74381]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 20 18:53:59 compute-0 python3.9[169518]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:53:59 compute-0 sudo[169516]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:59] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 18:53:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:53:59] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 18:53:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:53:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:53:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:53:59.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:00 compute-0 sudo[169670]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzreexkpvidcizidxyopggckbougouwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935239.7491558-303-118391950914896/AnsiballZ_file.py'
Jan 20 18:54:00 compute-0 sudo[169670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:00 compute-0 python3.9[169672]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:00 compute-0 sudo[169670]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:00 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8500001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:00 compute-0 ceph-mon[74381]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:00 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85140029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:00 compute-0 sudo[169822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtwfouraxtvwlsxuiextqbyjhrqaujrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935240.4025116-303-99521696668263/AnsiballZ_file.py'
Jan 20 18:54:00 compute-0 sudo[169822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:01 compute-0 python3.9[169824]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:01 compute-0 sudo[169822]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:01 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100027e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:54:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:01.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:54:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:02 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:02 compute-0 sudo[169976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsddyxjjrxgwmvznarqddlxwtrmcdauj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935242.2375565-453-25227061535665/AnsiballZ_file.py'
Jan 20 18:54:02 compute-0 sudo[169976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:02 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8500001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:02 compute-0 python3.9[169978]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:02 compute-0 ceph-mon[74381]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:02 compute-0 sudo[169976]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:02.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:03 compute-0 sudo[170128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umoulvbxcrqrwxnefvbhnebzidwagmym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935242.7737358-453-96255191540045/AnsiballZ_file.py'
Jan 20 18:54:03 compute-0 sudo[170128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:03 compute-0 python3.9[170130]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:03 compute-0 sudo[170128]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:03 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85140029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:03 compute-0 sudo[170281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svdixsahombdwmddmsvxweyxkwoycjnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935243.3375747-453-6777117573054/AnsiballZ_file.py'
Jan 20 18:54:03 compute-0 sudo[170281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:03 compute-0 python3.9[170283]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:03 compute-0 sudo[170281]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:03.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185404 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:54:04 compute-0 sudo[170434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcootcereqarvgswlzqusofdizmpqznl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935243.97897-453-226131220271089/AnsiballZ_file.py'
Jan 20 18:54:04 compute-0 sudo[170434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:04 compute-0 python3.9[170436]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:04 compute-0 sudo[170434]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:04 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100027e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:04 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c008f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:04 compute-0 sudo[170586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugimtzfruvwcpqaxxbpsvbdhnjbnvzwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935244.5747826-453-238988748134965/AnsiballZ_file.py'
Jan 20 18:54:04 compute-0 sudo[170586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:04.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:05 compute-0 python3.9[170588]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:05 compute-0 sudo[170586]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:05 compute-0 ceph-mon[74381]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:05 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f8500002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:05 compute-0 sudo[170738]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfnzdwzhbplhssmuyugzgjqusbrclrcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935245.1632094-453-47169389631312/AnsiballZ_file.py'
Jan 20 18:54:05 compute-0 sudo[170738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 18:54:05 compute-0 python3.9[170740]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:05 compute-0 sudo[170738]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:05.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:06 compute-0 sudo[170892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfzprusmztmjiykyaphzavyxyyaqzkqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935245.9027874-453-167282332629395/AnsiballZ_file.py'
Jan 20 18:54:06 compute-0 sudo[170892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:06 compute-0 python3.9[170894]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:54:06 compute-0 sudo[170892]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:06 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85140029b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:06 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f85100027e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:06.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:54:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:07.023Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:54:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:07.024Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:54:07 compute-0 ceph-mon[74381]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 18:54:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:07 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c009860 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 18:54:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:07.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:08 compute-0 podman[170963]: 2026-01-20 18:54:08.117756003 +0000 UTC m=+0.095732509 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 18:54:08 compute-0 sudo[171074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnotacqnjpycymnyeikigyeqdquiilef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935247.9635575-606-75354369630513/AnsiballZ_command.py'
Jan 20 18:54:08 compute-0 sudo[171074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:08 compute-0 python3.9[171076]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:08 compute-0 kernel: ganesha.nfsd[168361]: segfault at 50 ip 00007f85a46bf32e sp 00007f8523ffe210 error 4 in libntirpc.so.5.8[7f85a46a4000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 20 18:54:08 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:54:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[166416]: 20/01/2026 18:54:08 : epoch 696fcf2b : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f851c009860 fd 38 proxy ignored for local
Jan 20 18:54:08 compute-0 sudo[171074]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:08 compute-0 systemd[1]: Started Process Core Dump (PID 171079/UID 0).
Jan 20 18:54:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:08.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:09 compute-0 ceph-mon[74381]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 18:54:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:09 compute-0 python3.9[171230]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 18:54:09 compute-0 systemd-coredump[171080]: Process 166459 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 41:
                                                    #0  0x00007f85a46bf32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:54:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:09 compute-0 systemd[1]: systemd-coredump@6-171079-0.service: Deactivated successfully.
Jan 20 18:54:09 compute-0 systemd[1]: systemd-coredump@6-171079-0.service: Consumed 1.138s CPU time.
Jan 20 18:54:09 compute-0 podman[171261]: 2026-01-20 18:54:09.737997985 +0000 UTC m=+0.026483875 container died c259dd630f10443cf5ccfdd0ba6adcadf440c2397248508986c7e56a6da642a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c180747e8966a323b9215d3589ce815a7c53ded249cc9ca87bb0bc999faf3899-merged.mount: Deactivated successfully.
Jan 20 18:54:09 compute-0 podman[171261]: 2026-01-20 18:54:09.781085975 +0000 UTC m=+0.069571835 container remove c259dd630f10443cf5ccfdd0ba6adcadf440c2397248508986c7e56a6da642a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:54:09 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:54:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:09] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 18:54:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:09] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 18:54:09 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:54:09 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.337s CPU time.
Jan 20 18:54:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:09.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:54:10 compute-0 sudo[171430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgepmmxaitdtmwseucngtmmgkascbvui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935249.904413-660-34265706686807/AnsiballZ_systemd_service.py'
Jan 20 18:54:10 compute-0 sudo[171430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:10 compute-0 python3.9[171432]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 18:54:10 compute-0 systemd[1]: Reloading.
Jan 20 18:54:10 compute-0 systemd-rc-local-generator[171461]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:54:10 compute-0 systemd-sysv-generator[171465]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:54:10 compute-0 sudo[171430]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:10.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:11 compute-0 ceph-mon[74381]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:11 compute-0 sudo[171618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clfypwztynyisyiiqgnwybsvukxynljc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935251.2043965-684-151009747109265/AnsiballZ_command.py'
Jan 20 18:54:11 compute-0 sudo[171618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:11 compute-0 python3.9[171620]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:11 compute-0 sudo[171618]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:11.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:12 compute-0 sudo[171773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yeizdxkdjelpaomknahqkcizqioksnbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935251.7984376-684-18730995680983/AnsiballZ_command.py'
Jan 20 18:54:12 compute-0 sudo[171773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:12 compute-0 sudo[171776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:54:12 compute-0 sudo[171776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:12 compute-0 python3.9[171775]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:12 compute-0 sudo[171776]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:12 compute-0 sudo[171773]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:12 compute-0 ceph-mon[74381]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:12 compute-0 sudo[171951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzpzflfqiitqsjgnazapasedyatvjyqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935252.4780738-684-39354551438505/AnsiballZ_command.py'
Jan 20 18:54:12 compute-0 sudo[171951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:12.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:12 compute-0 python3.9[171953]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:13 compute-0 sudo[171951]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:13 compute-0 sudo[172104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryfkasnewanxmgnwitaunprvnzvzqfmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935253.1646967-684-30051256873828/AnsiballZ_command.py'
Jan 20 18:54:13 compute-0 sudo[172104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:54:13 compute-0 python3.9[172106]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:13 compute-0 sudo[172104]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:13.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:14 compute-0 sudo[172259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcwxeetoamopxqaauowmiahhshbcjtzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935253.7924275-684-110263618281722/AnsiballZ_command.py'
Jan 20 18:54:14 compute-0 sudo[172259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:14 compute-0 python3.9[172261]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:14 compute-0 sudo[172259]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185414 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:54:14 compute-0 sudo[172412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpuwtlglbovanezgcxmjkwdppjgcvla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935254.3737562-684-144540005042296/AnsiballZ_command.py'
Jan 20 18:54:14 compute-0 sudo[172412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:14 compute-0 ceph-mon[74381]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:54:14 compute-0 python3.9[172414]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:14.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:14 compute-0 sudo[172412]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:15 compute-0 sudo[172565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dynqcrfakdvkeneqdvbouqonwfygptgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935255.0251682-684-106181758434954/AnsiballZ_command.py'
Jan 20 18:54:15 compute-0 sudo[172565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:54:15 compute-0 python3.9[172567]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:54:15 compute-0 sudo[172565]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:54:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:15.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:54:16 compute-0 ceph-mon[74381]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:54:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:16.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:17.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:54:17 compute-0 sudo[172720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzskqdsvwdwkuzgfvpkxzyxhrjixbyrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935256.993781-846-152733112854338/AnsiballZ_getent.py'
Jan 20 18:54:17 compute-0 sudo[172720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185417 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:54:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [NOTICE] 019/185417 (4) : haproxy version is 2.3.17-d1c9119
Jan 20 18:54:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [NOTICE] 019/185417 (4) : path to executable is /usr/local/sbin/haproxy
Jan 20 18:54:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [ALERT] 019/185417 (4) : backend 'backend' has no server available!
Jan 20 18:54:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:54:17 compute-0 python3.9[172722]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 20 18:54:17 compute-0 sudo[172720]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:17.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:18 compute-0 sudo[172875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgizjedbycfdqfdgggaswseqyqujhnej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935257.9240313-870-27299332159237/AnsiballZ_group.py'
Jan 20 18:54:18 compute-0 sudo[172875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:18 compute-0 python3.9[172877]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:54:18 compute-0 groupadd[172878]: group added to /etc/group: name=libvirt, GID=42473
Jan 20 18:54:18 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:54:18 compute-0 groupadd[172878]: group added to /etc/gshadow: name=libvirt
Jan 20 18:54:18 compute-0 groupadd[172878]: new group: name=libvirt, GID=42473
Jan 20 18:54:18 compute-0 sudo[172875]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:18 compute-0 ceph-mon[74381]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:54:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:18.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:19 compute-0 sudo[173034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuevmpmkcsrnhbdftjgqxaxjcvirfrmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935258.9782457-894-83104776960956/AnsiballZ_user.py'
Jan 20 18:54:19 compute-0 sudo[173034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:54:19 compute-0 python3.9[173036]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 18:54:19 compute-0 useradd[173040]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 18:54:19 compute-0 sudo[173034]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:54:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:54:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:19.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:19 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 7.
Jan 20 18:54:19 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:54:19 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.337s CPU time.
Jan 20 18:54:20 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:54:20 compute-0 podman[173119]: 2026-01-20 18:54:20.184073914 +0000 UTC m=+0.037950387 container create 5a83a68f34001fae00764093866d1a62701b68c0868df64d0cead5810fcb2f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ca68a3b848089f51650a794e07752b3b4eba13748e4f3ac84107fa80518db9/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ca68a3b848089f51650a794e07752b3b4eba13748e4f3ac84107fa80518db9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ca68a3b848089f51650a794e07752b3b4eba13748e4f3ac84107fa80518db9/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52ca68a3b848089f51650a794e07752b3b4eba13748e4f3ac84107fa80518db9/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:20 compute-0 podman[173119]: 2026-01-20 18:54:20.236794314 +0000 UTC m=+0.090670807 container init 5a83a68f34001fae00764093866d1a62701b68c0868df64d0cead5810fcb2f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:54:20 compute-0 podman[173119]: 2026-01-20 18:54:20.241616859 +0000 UTC m=+0.095493332 container start 5a83a68f34001fae00764093866d1a62701b68c0868df64d0cead5810fcb2f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:54:20 compute-0 bash[173119]: 5a83a68f34001fae00764093866d1a62701b68c0868df64d0cead5810fcb2f78
Jan 20 18:54:20 compute-0 podman[173119]: 2026-01-20 18:54:20.166904402 +0000 UTC m=+0.020780905 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:20 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:54:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:20 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:54:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:54:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:20.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:54:21 compute-0 ceph-mon[74381]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:54:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:54:21 compute-0 sudo[173303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okkxbkvykmmtrgrkagkhcqlcumfucdjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935261.6109529-927-9157712080705/AnsiballZ_setup.py'
Jan 20 18:54:21 compute-0 sudo[173303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:21.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:22 compute-0 python3.9[173305]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:54:22 compute-0 sudo[173303]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:22 compute-0 sudo[173387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfkbfkigfndpykjulhrzwsarmzflvzug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935261.6109529-927-9157712080705/AnsiballZ_dnf.py'
Jan 20 18:54:22 compute-0 sudo[173387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:54:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:22.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:23 compute-0 python3.9[173389]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:54:23 compute-0 ceph-mon[74381]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:54:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:54:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:23.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:24 compute-0 ceph-mon[74381]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:54:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:24.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:54:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 20 18:54:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:54:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:25.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:26 compute-0 ceph-mon[74381]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 255 B/s wr, 1 op/s
Jan 20 18:54:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:26.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:27.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:54:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Jan 20 18:54:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:27 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 20 18:54:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:27 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 20 18:54:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:27 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:54:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:27 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:54:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:27 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:54:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:27.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:28.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:28 compute-0 ceph-mon[74381]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Jan 20 18:54:29 compute-0 podman[173407]: 2026-01-20 18:54:29.079072392 +0000 UTC m=+0.055188301 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 20 18:54:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:29 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:54:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:29 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:54:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:29 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:54:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:54:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:29] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Jan 20 18:54:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:29] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Jan 20 18:54:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:29.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:54:30.185 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:54:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:54:30.187 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:54:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:54:30.187 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:54:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:30.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:31 compute-0 ceph-mon[74381]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:54:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:54:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:31.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185432 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:54:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:32 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:54:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:32 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:54:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:32 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:54:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:32 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:54:32 compute-0 sudo[173504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:54:32 compute-0 sudo[173504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:32 compute-0 sudo[173504]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:32 compute-0 ceph-mon[74381]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:54:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:32.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 20 18:54:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:33.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:34 compute-0 sudo[173617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:54:34 compute-0 sudo[173617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:34 compute-0 sudo[173617]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:34 compute-0 sudo[173645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:54:34 compute-0 sudo[173645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:34.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:35 compute-0 sudo[173645]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000011:nfs.cephfs.2: -2
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:54:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:35 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:54:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:54:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:54:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:35 compute-0 sudo[173727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:54:35 compute-0 sudo[173727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:35 compute-0 sudo[173727]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:35 compute-0 sudo[173752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:54:35 compute-0 sudo[173752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:35.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:36 compute-0 ceph-mon[74381]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 20 18:54:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:54:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:54:36 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.298770531 +0000 UTC m=+0.024476038 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.414312725 +0000 UTC m=+0.140018212 container create 951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:54:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:36 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:36 compute-0 systemd[1]: Started libpod-conmon-951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c.scope.
Jan 20 18:54:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:54:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:36 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.641791722 +0000 UTC m=+0.367497269 container init 951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.651101554 +0000 UTC m=+0.376807051 container start 951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:54:36 compute-0 systemd[1]: libpod-951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c.scope: Deactivated successfully.
Jan 20 18:54:36 compute-0 nostalgic_kapitsa[173836]: 167 167
Jan 20 18:54:36 compute-0 conmon[173836]: conmon 951a1fd6d2d3fb65e2ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c.scope/container/memory.events
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.671526657 +0000 UTC m=+0.397232174 container attach 951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.672058672 +0000 UTC m=+0.397764159 container died 951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:54:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a557338820850a79a5b68d89a977c7ad2605cbfa4e9cd1326146fcd4d1e7d2b-merged.mount: Deactivated successfully.
Jan 20 18:54:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:36.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:36 compute-0 podman[173818]: 2026-01-20 18:54:36.904105758 +0000 UTC m=+0.629811245 container remove 951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kapitsa, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:54:36 compute-0 systemd[1]: libpod-conmon-951a1fd6d2d3fb65e2eaf7d937fd3a744d7c1c0df3b59140c64fdd38e8ebaf3c.scope: Deactivated successfully.
Jan 20 18:54:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:37.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:54:37 compute-0 podman[173859]: 2026-01-20 18:54:37.107417386 +0000 UTC m=+0.088979830 container create 6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:54:37 compute-0 podman[173859]: 2026-01-20 18:54:37.058621386 +0000 UTC m=+0.040183860 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:37 compute-0 systemd[1]: Started libpod-conmon-6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5.scope.
Jan 20 18:54:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6341cfb9974f37f435916ac9cb96b742103151f3b14010df4ca2e8a8afdb277c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6341cfb9974f37f435916ac9cb96b742103151f3b14010df4ca2e8a8afdb277c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6341cfb9974f37f435916ac9cb96b742103151f3b14010df4ca2e8a8afdb277c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6341cfb9974f37f435916ac9cb96b742103151f3b14010df4ca2e8a8afdb277c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6341cfb9974f37f435916ac9cb96b742103151f3b14010df4ca2e8a8afdb277c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:37 compute-0 podman[173859]: 2026-01-20 18:54:37.277556983 +0000 UTC m=+0.259119447 container init 6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:54:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:37 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f948c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:37 compute-0 podman[173859]: 2026-01-20 18:54:37.28529036 +0000 UTC m=+0.266852804 container start 6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:54:37 compute-0 podman[173859]: 2026-01-20 18:54:37.288436478 +0000 UTC m=+0.269998942 container attach 6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:54:37 compute-0 ceph-mon[74381]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:54:37 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:37 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:54:37 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:54:37 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:54:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185437 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:54:37 compute-0 dazzling_noyce[173876]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:54:37 compute-0 dazzling_noyce[173876]: --> All data devices are unavailable
Jan 20 18:54:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 18:54:37 compute-0 systemd[1]: libpod-6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5.scope: Deactivated successfully.
Jan 20 18:54:37 compute-0 podman[173893]: 2026-01-20 18:54:37.668213641 +0000 UTC m=+0.021697129 container died 6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-6341cfb9974f37f435916ac9cb96b742103151f3b14010df4ca2e8a8afdb277c-merged.mount: Deactivated successfully.
Jan 20 18:54:37 compute-0 podman[173893]: 2026-01-20 18:54:37.71055165 +0000 UTC m=+0.064035118 container remove 6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_noyce, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 18:54:37 compute-0 systemd[1]: libpod-conmon-6c483ed8322caa9ca3707d6da3b00fcb8b3123e8909f2dc6137f4091d683b5e5.scope: Deactivated successfully.
Jan 20 18:54:37 compute-0 sudo[173752]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:37 compute-0 sudo[173909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:54:37 compute-0 sudo[173909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:37 compute-0 sudo[173909]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:37 compute-0 sudo[173934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:54:37 compute-0 sudo[173934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:37.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.252774715 +0000 UTC m=+0.038091261 container create 6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:54:38 compute-0 systemd[1]: Started libpod-conmon-6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60.scope.
Jan 20 18:54:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.312051839 +0000 UTC m=+0.097368375 container init 6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mirzakhani, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.317885072 +0000 UTC m=+0.103201608 container start 6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mirzakhani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:54:38 compute-0 interesting_mirzakhani[174016]: 167 167
Jan 20 18:54:38 compute-0 systemd[1]: libpod-6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60.scope: Deactivated successfully.
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.234482841 +0000 UTC m=+0.019799407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.392659672 +0000 UTC m=+0.177976208 container attach 6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mirzakhani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.393525636 +0000 UTC m=+0.178842172 container died 6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 18:54:38 compute-0 podman[174013]: 2026-01-20 18:54:38.406652775 +0000 UTC m=+0.121099231 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 18:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7cb6a4ae853f75745a2bc4f8ba37108fd8c7e6563c959b16db47324d6b7007c-merged.mount: Deactivated successfully.
Jan 20 18:54:38 compute-0 podman[173999]: 2026-01-20 18:54:38.432380157 +0000 UTC m=+0.217696693 container remove 6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=interesting_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:54:38 compute-0 systemd[1]: libpod-conmon-6b86ec1d033244fe522b3813f798dc9a7101ca5737a8f99751873facecd2ac60.scope: Deactivated successfully.
Jan 20 18:54:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185438 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:54:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:38 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9480000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:38 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9488000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:38 compute-0 podman[174064]: 2026-01-20 18:54:38.598053769 +0000 UTC m=+0.057247449 container create 57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:54:38 compute-0 systemd[1]: Started libpod-conmon-57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c.scope.
Jan 20 18:54:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b731f129993971ddf8f428d8af37fe78a2a409942b7c737b5b969ed4dca92712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b731f129993971ddf8f428d8af37fe78a2a409942b7c737b5b969ed4dca92712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:38 compute-0 podman[174064]: 2026-01-20 18:54:38.56567671 +0000 UTC m=+0.024870410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b731f129993971ddf8f428d8af37fe78a2a409942b7c737b5b969ed4dca92712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b731f129993971ddf8f428d8af37fe78a2a409942b7c737b5b969ed4dca92712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:38 compute-0 podman[174064]: 2026-01-20 18:54:38.70135843 +0000 UTC m=+0.160552110 container init 57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:54:38 compute-0 podman[174064]: 2026-01-20 18:54:38.708370597 +0000 UTC m=+0.167564277 container start 57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_bouman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:54:38 compute-0 podman[174064]: 2026-01-20 18:54:38.712115702 +0000 UTC m=+0.171309462 container attach 57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_bouman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:54:38 compute-0 ceph-mon[74381]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 18:54:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:38.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:38 compute-0 goofy_bouman[174081]: {
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:     "0": [
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:         {
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "devices": [
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "/dev/loop3"
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             ],
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "lv_name": "ceph_lv0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "lv_size": "21470642176",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "name": "ceph_lv0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "tags": {
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.cluster_name": "ceph",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.crush_device_class": "",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.encrypted": "0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.osd_id": "0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.type": "block",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.vdo": "0",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:                 "ceph.with_tpm": "0"
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             },
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "type": "block",
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:             "vg_name": "ceph_vg0"
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:         }
Jan 20 18:54:38 compute-0 goofy_bouman[174081]:     ]
Jan 20 18:54:38 compute-0 goofy_bouman[174081]: }
Jan 20 18:54:38 compute-0 podman[174064]: 2026-01-20 18:54:38.977685578 +0000 UTC m=+0.436879268 container died 57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_bouman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:54:38 compute-0 systemd[1]: libpod-57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c.scope: Deactivated successfully.
Jan 20 18:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b731f129993971ddf8f428d8af37fe78a2a409942b7c737b5b969ed4dca92712-merged.mount: Deactivated successfully.
Jan 20 18:54:39 compute-0 podman[174064]: 2026-01-20 18:54:39.02010517 +0000 UTC m=+0.479298850 container remove 57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325)
Jan 20 18:54:39 compute-0 systemd[1]: libpod-conmon-57b91688047ea1cab324628a7574f2ff55d9aee739fac3bf69b7dbd98824111c.scope: Deactivated successfully.
Jan 20 18:54:39 compute-0 sudo[173934]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:39 compute-0 sudo[174102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:54:39 compute-0 sudo[174102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:39 compute-0 sudo[174102]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:39 compute-0 sudo[174127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:54:39 compute-0 sudo[174127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:39 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0001c40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.519654496 +0000 UTC m=+0.041671681 container create 203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lederberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:54:39 compute-0 systemd[1]: Started libpod-conmon-203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990.scope.
Jan 20 18:54:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.589172997 +0000 UTC m=+0.111190202 container init 203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lederberg, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.497461683 +0000 UTC m=+0.019478898 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.595887736 +0000 UTC m=+0.117904931 container start 203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lederberg, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.599134947 +0000 UTC m=+0.121152162 container attach 203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:54:39 compute-0 lucid_lederberg[174207]: 167 167
Jan 20 18:54:39 compute-0 systemd[1]: libpod-203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990.scope: Deactivated successfully.
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.600773233 +0000 UTC m=+0.122790418 container died 203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lederberg, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:54:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Jan 20 18:54:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-77cc408bec9d068db2e849d3961b6ff29ce777d3aad71280e7fb370896e4fae5-merged.mount: Deactivated successfully.
Jan 20 18:54:39 compute-0 podman[174190]: 2026-01-20 18:54:39.637790872 +0000 UTC m=+0.159808057 container remove 203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 18:54:39 compute-0 systemd[1]: libpod-conmon-203bbaaaeede5b75ad876220da3eb145b474abf4beefeaefd7a85914f68a8990.scope: Deactivated successfully.
Jan 20 18:54:39 compute-0 podman[174231]: 2026-01-20 18:54:39.804280347 +0000 UTC m=+0.044360717 container create 1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_khorana, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 18:54:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:39] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Jan 20 18:54:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:39] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Jan 20 18:54:39 compute-0 systemd[1]: Started libpod-conmon-1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0.scope.
Jan 20 18:54:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd62ba615fdd1d0489e912839bcba89f202262b95ed3c8cac4db53ca09a8381c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd62ba615fdd1d0489e912839bcba89f202262b95ed3c8cac4db53ca09a8381c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd62ba615fdd1d0489e912839bcba89f202262b95ed3c8cac4db53ca09a8381c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd62ba615fdd1d0489e912839bcba89f202262b95ed3c8cac4db53ca09a8381c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:54:39 compute-0 podman[174231]: 2026-01-20 18:54:39.788796112 +0000 UTC m=+0.028876502 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:54:39 compute-0 podman[174231]: 2026-01-20 18:54:39.896503377 +0000 UTC m=+0.136583747 container init 1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 18:54:39 compute-0 podman[174231]: 2026-01-20 18:54:39.905075327 +0000 UTC m=+0.145155727 container start 1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_khorana, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:54:39 compute-0 podman[174231]: 2026-01-20 18:54:39.910127698 +0000 UTC m=+0.150208088 container attach 1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_khorana, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:54:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:39.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:40 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f948c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:40 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:40 compute-0 lvm[174322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:54:40 compute-0 lvm[174322]: VG ceph_vg0 finished
Jan 20 18:54:40 compute-0 inspiring_khorana[174248]: {}
Jan 20 18:54:40 compute-0 systemd[1]: libpod-1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0.scope: Deactivated successfully.
Jan 20 18:54:40 compute-0 systemd[1]: libpod-1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0.scope: Consumed 1.151s CPU time.
Jan 20 18:54:40 compute-0 podman[174231]: 2026-01-20 18:54:40.713968229 +0000 UTC m=+0.954048609 container died 1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 20 18:54:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd62ba615fdd1d0489e912839bcba89f202262b95ed3c8cac4db53ca09a8381c-merged.mount: Deactivated successfully.
Jan 20 18:54:40 compute-0 podman[174231]: 2026-01-20 18:54:40.760909867 +0000 UTC m=+1.000990227 container remove 1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_khorana, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:54:40 compute-0 systemd[1]: libpod-conmon-1696867abfa548279e5603d61f8875d9b424e12680d05b3c5109e02293d7fbb0.scope: Deactivated successfully.
Jan 20 18:54:40 compute-0 sudo[174127]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:54:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:54:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:40.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:40 compute-0 sudo[174339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:54:40 compute-0 sudo[174339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:40 compute-0 sudo[174339]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:40 compute-0 ceph-mon[74381]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Jan 20 18:54:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:54:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:54:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:41 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9488001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Jan 20 18:54:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:41.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:42 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:42 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:42.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:43 compute-0 ceph-mon[74381]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Jan 20 18:54:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:43 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Jan 20 18:54:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:43.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:44 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9488001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:44 compute-0 ceph-mon[74381]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 682 B/s wr, 3 op/s
Jan 20 18:54:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:44 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f948c001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:44.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:45 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 20 18:54:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:45.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:46 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:46 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94800016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:46.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:47.027Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:54:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:47 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:47 compute-0 ceph-mon[74381]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 20 18:54:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Jan 20 18:54:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:47.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:48 compute-0 ceph-mon[74381]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Jan 20 18:54:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:48 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9488002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:48 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:49 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9480002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:49] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 20 18:54:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:49] "GET /metrics HTTP/1.1" 200 48348 "" "Prometheus/2.51.0"
Jan 20 18:54:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185450 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:54:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:50 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:50 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9480002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:50.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:50 compute-0 ceph-mon[74381]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:51 compute-0 kernel: SELinux:  Converting 2782 SID table entries...
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:54:51 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:54:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:51 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9488002b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:52 compute-0 sudo[174387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:54:52 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 20 18:54:52 compute-0 sudo[174387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:54:52 compute-0 sudo[174387]: pam_unix(sudo:session): session closed for user root
Jan 20 18:54:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:52 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:52 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:52.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:53 compute-0 ceph-mon[74381]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:53 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:54.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:54 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9488003870 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:54 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94b0002b30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000028s ======
Jan 20 18:54:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:54.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000028s
Jan 20 18:54:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:54:54
Jan 20 18:54:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:54:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:54:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'volumes']
Jan 20 18:54:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:54:55 compute-0 ceph-mon[74381]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:54:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:55 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:54:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:56.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:56 compute-0 kernel: ganesha.nfsd[173719]: segfault at 50 ip 00007f953599532e sp 00007f94a9ffa210 error 4 in libntirpc.so.5.8[7f953597a000+2c000] likely on CPU 5 (core 0, socket 5)
Jan 20 18:54:56 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:54:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[173134]: 20/01/2026 18:54:56 : epoch 696fcf5c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 39 proxy ignored for local
Jan 20 18:54:56 compute-0 systemd[1]: Started Process Core Dump (PID 174417/UID 0).
Jan 20 18:54:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:54:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:56.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:54:57 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 18:54:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:54:57.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:54:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:54:57 compute-0 systemd-coredump[174418]: Process 173138 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 47:
                                                    #0  0x00007f953599532e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:54:57 compute-0 systemd[1]: systemd-coredump@7-174417-0.service: Deactivated successfully.
Jan 20 18:54:57 compute-0 systemd[1]: systemd-coredump@7-174417-0.service: Consumed 1.300s CPU time.
Jan 20 18:54:57 compute-0 ceph-mon[74381]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:54:58.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:58 compute-0 podman[174425]: 2026-01-20 18:54:58.06249243 +0000 UTC m=+0.048517223 container died 5a83a68f34001fae00764093866d1a62701b68c0868df64d0cead5810fcb2f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:54:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-52ca68a3b848089f51650a794e07752b3b4eba13748e4f3ac84107fa80518db9-merged.mount: Deactivated successfully.
Jan 20 18:54:58 compute-0 podman[174425]: 2026-01-20 18:54:58.114948573 +0000 UTC m=+0.100973336 container remove 5a83a68f34001fae00764093866d1a62701b68c0868df64d0cead5810fcb2f78 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 18:54:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:54:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:54:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.366s CPU time.
Jan 20 18:54:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:54:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:54:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:54:58.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:54:59 compute-0 ceph-mon[74381]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:54:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:54:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:54:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:59] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:54:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:54:59] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:55:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:00.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:00 compute-0 podman[174470]: 2026-01-20 18:55:00.077717972 +0000 UTC m=+0.056746134 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:55:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:55:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:00.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:55:01 compute-0 ceph-mon[74381]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:55:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:55:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:02 compute-0 kernel: SELinux:  Converting 2782 SID table entries...
Jan 20 18:55:02 compute-0 ceph-mon[74381]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:55:02 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:55:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185502 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:55:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:02.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:04.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:04.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:05 compute-0 ceph-mon[74381]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:06.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:06 compute-0 ceph-mon[74381]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:06.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:07.030Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:55:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:08.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:08 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 8.
Jan 20 18:55:08 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:55:08 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.366s CPU time.
Jan 20 18:55:08 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 20 18:55:08 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:55:08 compute-0 podman[174505]: 2026-01-20 18:55:08.623557875 +0000 UTC m=+0.092908999 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:55:08 compute-0 podman[174575]: 2026-01-20 18:55:08.735473831 +0000 UTC m=+0.051447916 container create 898bd6e879b4c0478d25cfa0540942e1d4af1388e5ef00aacda3e506863d6952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:55:08 compute-0 podman[174575]: 2026-01-20 18:55:08.70945272 +0000 UTC m=+0.025426825 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceed1f106d38d4325baebe7a3807657814c5394508c1c55ce5d8ab5e3b4d4cc7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceed1f106d38d4325baebe7a3807657814c5394508c1c55ce5d8ab5e3b4d4cc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceed1f106d38d4325baebe7a3807657814c5394508c1c55ce5d8ab5e3b4d4cc7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceed1f106d38d4325baebe7a3807657814c5394508c1c55ce5d8ab5e3b4d4cc7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:08.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:08 compute-0 podman[174575]: 2026-01-20 18:55:08.942928684 +0000 UTC m=+0.258902789 container init 898bd6e879b4c0478d25cfa0540942e1d4af1388e5ef00aacda3e506863d6952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:55:08 compute-0 podman[174575]: 2026-01-20 18:55:08.947398081 +0000 UTC m=+0.263372166 container start 898bd6e879b4c0478d25cfa0540942e1d4af1388e5ef00aacda3e506863d6952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:55:08 compute-0 bash[174575]: 898bd6e879b4c0478d25cfa0540942e1d4af1388e5ef00aacda3e506863d6952
Jan 20 18:55:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:08 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:55:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:08 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:55:08 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:55:09 compute-0 ceph-mon[74381]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:55:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:09] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:55:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:09] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:55:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:10.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:10.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185511 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:55:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [ALERT] 019/185511 (4) : backend 'backend' has no server available!
Jan 20 18:55:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:12.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:12 compute-0 sudo[174636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:55:12 compute-0 sudo[174636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:12 compute-0 sudo[174636]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:12.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:13 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:55:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:55:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:14.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:14 compute-0 ceph-mon[74381]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:14 compute-0 ceph-mon[74381]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:55:14 compute-0 ceph-mon[74381]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:55:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:14.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:55:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:55:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:55:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000051s ======
Jan 20 18:55:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:16.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 20 18:55:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:16.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:17.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:55:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=sqlstore.transactions t=2026-01-20T18:55:17.519962414Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Jan 20 18:55:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=cleanup t=2026-01-20T18:55:17.534840142Z level=info msg="Completed cleanup jobs" duration=26.45321ms
Jan 20 18:55:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 18:55:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana.update.checker t=2026-01-20T18:55:17.653217177Z level=info msg="Update check succeeded" duration=61.369474ms
Jan 20 18:55:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugins.update.checker t=2026-01-20T18:55:17.676028463Z level=info msg="Update check succeeded" duration=84.15766ms
Jan 20 18:55:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:55:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:18.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:55:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185518 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:55:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:18.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:19 compute-0 ceph-mon[74381]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:55:19 compute-0 ceph-mon[74381]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 18:55:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 20 18:55:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:19] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:55:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:19] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:55:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:20.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:20.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 20 18:55:21 compute-0 ceph-mon[74381]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000013:nfs.cephfs.2: -2
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:55:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:22.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfcc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80014d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:22.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:23 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 20 18:55:23 compute-0 ceph-mon[74381]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 853 B/s wr, 2 op/s
Jan 20 18:55:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:24.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185524 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:55:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:24 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:24 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:24.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:55:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:25 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:55:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:25 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:55:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:25 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:25 compute-0 ceph-mon[74381]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 20 18:55:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:55:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:26.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:55:26 compute-0 ceph-mon[74381]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:55:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:26 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:26 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:26.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:27.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:55:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:27 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Jan 20 18:55:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:55:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:28.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:55:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:28 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:28 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:28 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:55:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:28.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:29 compute-0 ceph-mon[74381]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.4 KiB/s wr, 4 op/s
Jan 20 18:55:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:29 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:55:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:29] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 18:55:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:29] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 18:55:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:30.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:55:30.187 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:55:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:55:30.187 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:55:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:55:30.187 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:55:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:30 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:30 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:30.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:31 compute-0 ceph-mon[74381]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:55:31 compute-0 podman[185831]: 2026-01-20 18:55:31.076113387 +0000 UTC m=+0.052509214 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 18:55:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:31 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 20 18:55:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:32.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:32 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:32 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb40019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:32 compute-0 sudo[186864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:55:32 compute-0 sudo[186864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:32 compute-0 sudo[186864]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:32.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:33 compute-0 ceph-mon[74381]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Jan 20 18:55:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:33 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 18:55:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:34.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:34 compute-0 ceph-mon[74381]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 18:55:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:34 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:34 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:34.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:35 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb40019e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:55:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185535 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:55:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:36.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:36 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:36 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:36.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:37.033Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:55:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:37.033Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:55:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:37 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:37 compute-0 ceph-mon[74381]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:55:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:55:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:38.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:38 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:38 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:38 compute-0 ceph-mon[74381]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:55:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:38.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:39 compute-0 podman[191323]: 2026-01-20 18:55:39.097648046 +0000 UTC m=+0.072031575 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:55:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:39 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:55:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:39] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 18:55:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:39] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 18:55:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:40.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:40 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:40 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:40 compute-0 ceph-mon[74381]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:55:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:55:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:40.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:41 compute-0 sudo[191636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:55:41 compute-0 sudo[191636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:41 compute-0 sudo[191636]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:41 compute-0 sudo[191661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:55:41 compute-0 sudo[191661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:41 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0003730 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:55:41 compute-0 sudo[191661]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 18:55:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:41 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:42.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:42 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:42 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:42 compute-0 ceph-mon[74381]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:55:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:42.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:43 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:55:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:44.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 18:55:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 18:55:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:44 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 18:55:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:44 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 18:55:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:44.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 18:55:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:45 compute-0 ceph-mon[74381]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:55:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:45 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 18:55:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:55:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:55:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:45 compute-0 sudo[191731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:55:45 compute-0 sudo[191731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:45 compute-0 sudo[191731]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:46 compute-0 sudo[191756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:55:46 compute-0 sudo[191756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:55:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:46.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:55:46 compute-0 ceph-mon[74381]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:55:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.398983167 +0000 UTC m=+0.041437964 container create de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hypatia, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:55:46 compute-0 systemd[1]: Started libpod-conmon-de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f.scope.
Jan 20 18:55:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.379782855 +0000 UTC m=+0.022237672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.476712159 +0000 UTC m=+0.119166976 container init de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hypatia, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.484267276 +0000 UTC m=+0.126722073 container start de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.487671845 +0000 UTC m=+0.130126652 container attach de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:55:46 compute-0 trusting_hypatia[191838]: 167 167
Jan 20 18:55:46 compute-0 systemd[1]: libpod-de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f.scope: Deactivated successfully.
Jan 20 18:55:46 compute-0 conmon[191838]: conmon de038469042bebf58c06 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f.scope/container/memory.events
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.490620212 +0000 UTC m=+0.133075039 container died de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 20 18:55:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e59087b3b090a2574c1988320bc9feb37ddc45c1bb8c472b43724e3ba26d2bb-merged.mount: Deactivated successfully.
Jan 20 18:55:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:46 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:46 compute-0 podman[191821]: 2026-01-20 18:55:46.535558897 +0000 UTC m=+0.178013694 container remove de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hypatia, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:55:46 compute-0 systemd[1]: libpod-conmon-de038469042bebf58c06df92c394530b61f3cf2d73042678dd217b252e427b3f.scope: Deactivated successfully.
Jan 20 18:55:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:46 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:46 compute-0 podman[191862]: 2026-01-20 18:55:46.707723418 +0000 UTC m=+0.047195775 container create f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wiles, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 18:55:46 compute-0 systemd[1]: Started libpod-conmon-f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8.scope.
Jan 20 18:55:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c94377e80d947ef64a368178d50d9b95e5255080f9d0dcf54518385643b6ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:46 compute-0 podman[191862]: 2026-01-20 18:55:46.68522917 +0000 UTC m=+0.024701537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c94377e80d947ef64a368178d50d9b95e5255080f9d0dcf54518385643b6ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c94377e80d947ef64a368178d50d9b95e5255080f9d0dcf54518385643b6ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c94377e80d947ef64a368178d50d9b95e5255080f9d0dcf54518385643b6ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c94377e80d947ef64a368178d50d9b95e5255080f9d0dcf54518385643b6ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:46 compute-0 podman[191862]: 2026-01-20 18:55:46.802888636 +0000 UTC m=+0.142361003 container init f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:55:46 compute-0 podman[191862]: 2026-01-20 18:55:46.809490589 +0000 UTC m=+0.148962936 container start f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 18:55:46 compute-0 podman[191862]: 2026-01-20 18:55:46.813079212 +0000 UTC m=+0.152551579 container attach f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 18:55:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:46.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:47.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:55:47 compute-0 admiring_wiles[191878]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:55:47 compute-0 admiring_wiles[191878]: --> All data devices are unavailable
Jan 20 18:55:47 compute-0 systemd[1]: libpod-f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8.scope: Deactivated successfully.
Jan 20 18:55:47 compute-0 podman[191862]: 2026-01-20 18:55:47.133069937 +0000 UTC m=+0.472542284 container died f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wiles, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:55:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1c94377e80d947ef64a368178d50d9b95e5255080f9d0dcf54518385643b6ee-merged.mount: Deactivated successfully.
Jan 20 18:55:47 compute-0 podman[191862]: 2026-01-20 18:55:47.176341909 +0000 UTC m=+0.515814276 container remove f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_wiles, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:55:47 compute-0 systemd[1]: libpod-conmon-f736ab79c34063392988de7dbf40b42f086b91988eb62c2afe033398ebc549f8.scope: Deactivated successfully.
Jan 20 18:55:47 compute-0 sudo[191756]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:47 compute-0 sudo[191904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:55:47 compute-0 sudo[191904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:47 compute-0 sudo[191904]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:47 compute-0 sudo[191929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:55:47 compute-0 sudo[191929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:47 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.715812951 +0000 UTC m=+0.040445998 container create 91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 18:55:47 compute-0 systemd[1]: Started libpod-conmon-91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a.scope.
Jan 20 18:55:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.774128076 +0000 UTC m=+0.098761123 container init 91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.790172545 +0000 UTC m=+0.114805572 container start 91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.699720031 +0000 UTC m=+0.024353078 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.793985705 +0000 UTC m=+0.118618732 container attach 91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:55:47 compute-0 magical_satoshi[192014]: 167 167
Jan 20 18:55:47 compute-0 systemd[1]: libpod-91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a.scope: Deactivated successfully.
Jan 20 18:55:47 compute-0 conmon[192014]: conmon 91f7387ff264fc712fe6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a.scope/container/memory.events
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.798666357 +0000 UTC m=+0.123299384 container died 91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 18:55:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-d596bd9d2736cf030ef1ddcb0a59ac17b258440d6adcae4808d72cb3c8e8baed-merged.mount: Deactivated successfully.
Jan 20 18:55:47 compute-0 podman[191998]: 2026-01-20 18:55:47.835059038 +0000 UTC m=+0.159692065 container remove 91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:55:47 compute-0 systemd[1]: libpod-conmon-91f7387ff264fc712fe66809a44c6d4c532a09a5d7cdadddc40716b21c06bd7a.scope: Deactivated successfully.
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:48.004676102 +0000 UTC m=+0.038577549 container create 565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 18:55:48 compute-0 systemd[1]: Started libpod-conmon-565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a.scope.
Jan 20 18:55:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:55:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32d4104641179974d6ed2e2aca02d0d93edf8a10b7f8f4ce5fb3dd62cdc5c93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32d4104641179974d6ed2e2aca02d0d93edf8a10b7f8f4ce5fb3dd62cdc5c93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32d4104641179974d6ed2e2aca02d0d93edf8a10b7f8f4ce5fb3dd62cdc5c93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a32d4104641179974d6ed2e2aca02d0d93edf8a10b7f8f4ce5fb3dd62cdc5c93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:48.073356128 +0000 UTC m=+0.107257595 container init 565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:48.081559292 +0000 UTC m=+0.115460739 container start 565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:48.084430357 +0000 UTC m=+0.118331804 container attach 565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:47.989707932 +0000 UTC m=+0.023609379 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:48.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]: {
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:     "0": [
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:         {
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "devices": [
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "/dev/loop3"
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             ],
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "lv_name": "ceph_lv0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "lv_size": "21470642176",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "name": "ceph_lv0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "tags": {
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.cluster_name": "ceph",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.crush_device_class": "",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.encrypted": "0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.osd_id": "0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.type": "block",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.vdo": "0",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:                 "ceph.with_tpm": "0"
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             },
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "type": "block",
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:             "vg_name": "ceph_vg0"
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:         }
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]:     ]
Jan 20 18:55:48 compute-0 thirsty_gauss[192055]: }
Jan 20 18:55:48 compute-0 systemd[1]: libpod-565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a.scope: Deactivated successfully.
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:48.371602924 +0000 UTC m=+0.405504542 container died 565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 18:55:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-a32d4104641179974d6ed2e2aca02d0d93edf8a10b7f8f4ce5fb3dd62cdc5c93-merged.mount: Deactivated successfully.
Jan 20 18:55:48 compute-0 podman[192038]: 2026-01-20 18:55:48.409477086 +0000 UTC m=+0.443378533 container remove 565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 18:55:48 compute-0 systemd[1]: libpod-conmon-565593ff62826a0d8bdffe49ebd954c4ede2caab4c7655613ee53304ff26297a.scope: Deactivated successfully.
Jan 20 18:55:48 compute-0 sudo[191929]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:48 compute-0 sudo[192075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:55:48 compute-0 sudo[192075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:48 compute-0 sudo[192075]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:48 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:48 compute-0 sudo[192100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:55:48 compute-0 sudo[192100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:48 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:48 compute-0 ceph-mon[74381]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:48 compute-0 podman[192164]: 2026-01-20 18:55:48.956972788 +0000 UTC m=+0.068418780 container create 87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cori, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 18:55:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:55:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:48.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:55:48 compute-0 systemd[1]: Started libpod-conmon-87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724.scope.
Jan 20 18:55:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:55:49 compute-0 podman[192164]: 2026-01-20 18:55:48.912537066 +0000 UTC m=+0.023983088 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:49 compute-0 podman[192164]: 2026-01-20 18:55:49.017143941 +0000 UTC m=+0.128589953 container init 87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cori, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:55:49 compute-0 podman[192164]: 2026-01-20 18:55:49.023955198 +0000 UTC m=+0.135401190 container start 87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:55:49 compute-0 podman[192164]: 2026-01-20 18:55:49.027381018 +0000 UTC m=+0.138827100 container attach 87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 18:55:49 compute-0 determined_cori[192181]: 167 167
Jan 20 18:55:49 compute-0 systemd[1]: libpod-87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724.scope: Deactivated successfully.
Jan 20 18:55:49 compute-0 podman[192164]: 2026-01-20 18:55:49.029012931 +0000 UTC m=+0.140458933 container died 87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cori, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-155830168e63aab1192e5bc8f410fab1769a31ccfc86bbaec1ec85cbbfc715ba-merged.mount: Deactivated successfully.
Jan 20 18:55:49 compute-0 podman[192164]: 2026-01-20 18:55:49.076124603 +0000 UTC m=+0.187570595 container remove 87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 18:55:49 compute-0 systemd[1]: libpod-conmon-87b8cb11153e9018a6118b8332936ddf0cfdcd8b5fdb8b9a56737a292e445724.scope: Deactivated successfully.
Jan 20 18:55:49 compute-0 podman[192206]: 2026-01-20 18:55:49.235519219 +0000 UTC m=+0.039399171 container create bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shaw, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:55:49 compute-0 systemd[1]: Started libpod-conmon-bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5.scope.
Jan 20 18:55:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c643a6ee0abd71efbdcdab108dddab03f6a1ffb3332a561e95dbe95d36a435d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c643a6ee0abd71efbdcdab108dddab03f6a1ffb3332a561e95dbe95d36a435d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c643a6ee0abd71efbdcdab108dddab03f6a1ffb3332a561e95dbe95d36a435d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c643a6ee0abd71efbdcdab108dddab03f6a1ffb3332a561e95dbe95d36a435d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:55:49 compute-0 podman[192206]: 2026-01-20 18:55:49.21753528 +0000 UTC m=+0.021415242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:55:49 compute-0 podman[192206]: 2026-01-20 18:55:49.318259283 +0000 UTC m=+0.122139235 container init bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shaw, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:55:49 compute-0 podman[192206]: 2026-01-20 18:55:49.32314614 +0000 UTC m=+0.127026092 container start bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shaw, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:55:49 compute-0 podman[192206]: 2026-01-20 18:55:49.326538018 +0000 UTC m=+0.130418060 container attach bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shaw, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:55:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:49 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:49] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 18:55:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:49] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 18:55:49 compute-0 lvm[192296]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:55:49 compute-0 lvm[192296]: VG ceph_vg0 finished
Jan 20 18:55:49 compute-0 admiring_shaw[192220]: {}
Jan 20 18:55:50 compute-0 systemd[1]: libpod-bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5.scope: Deactivated successfully.
Jan 20 18:55:50 compute-0 systemd[1]: libpod-bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5.scope: Consumed 1.079s CPU time.
Jan 20 18:55:50 compute-0 podman[192206]: 2026-01-20 18:55:50.020944602 +0000 UTC m=+0.824824574 container died bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shaw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default)
Jan 20 18:55:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c643a6ee0abd71efbdcdab108dddab03f6a1ffb3332a561e95dbe95d36a435d-merged.mount: Deactivated successfully.
Jan 20 18:55:50 compute-0 podman[192206]: 2026-01-20 18:55:50.06372062 +0000 UTC m=+0.867600572 container remove bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_shaw, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:55:50 compute-0 systemd[1]: libpod-conmon-bb580cc83026311c4f97d5b0f0b7d039b0736eb8e56056ce676c4571b40143c5.scope: Deactivated successfully.
Jan 20 18:55:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:50.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:50 compute-0 sudo[192100]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:55:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:55:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:50 compute-0 sudo[192314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:55:50 compute-0 sudo[192314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:50 compute-0 sudo[192314]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:50 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:50 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:50.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:51 compute-0 ceph-mon[74381]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:55:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:51 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:55:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:52.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:55:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:52 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:52 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80021d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:52 compute-0 ceph-mon[74381]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:52 compute-0 sudo[192345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:55:52 compute-0 sudo[192345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:55:52 compute-0 sudo[192345]: pam_unix(sudo:session): session closed for user root
Jan 20 18:55:52 compute-0 kernel: SELinux:  Converting 2783 SID table entries...
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:55:52 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:55:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:55:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:52.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:55:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:53 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:55:53 compute-0 groupadd[192380]: group added to /etc/group: name=dnsmasq, GID=992
Jan 20 18:55:53 compute-0 groupadd[192380]: group added to /etc/gshadow: name=dnsmasq
Jan 20 18:55:53 compute-0 groupadd[192380]: new group: name=dnsmasq, GID=992
Jan 20 18:55:53 compute-0 useradd[192387]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 20 18:55:53 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:55:53 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 20 18:55:53 compute-0 dbus-broker-launch[758]: Noticed file-system modification, trigger reload.
Jan 20 18:55:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:54.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:54 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:54 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:54 compute-0 ceph-mon[74381]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:55:54 compute-0 groupadd[192402]: group added to /etc/group: name=clevis, GID=991
Jan 20 18:55:54 compute-0 groupadd[192402]: group added to /etc/gshadow: name=clevis
Jan 20 18:55:54 compute-0 groupadd[192402]: new group: name=clevis, GID=991
Jan 20 18:55:54 compute-0 useradd[192409]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 20 18:55:54 compute-0 usermod[192419]: add 'clevis' to group 'tss'
Jan 20 18:55:54 compute-0 usermod[192419]: add 'clevis' to shadow group 'tss'
Jan 20 18:55:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:55:54
Jan 20 18:55:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:55:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:55:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'images', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'vms']
Jan 20 18:55:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:55:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:54.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:55:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:55 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:55:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:55:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:56.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:56 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:56 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:55:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:56.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:55:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:55:57.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:55:57 compute-0 ceph-mon[74381]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:57 compute-0 polkitd[43401]: Reloading rules
Jan 20 18:55:57 compute-0 polkitd[43401]: Collecting garbage unconditionally...
Jan 20 18:55:57 compute-0 polkitd[43401]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 18:55:57 compute-0 polkitd[43401]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 18:55:57 compute-0 polkitd[43401]: Finished loading, compiling and executing 3 rules
Jan 20 18:55:57 compute-0 polkitd[43401]: Reloading rules
Jan 20 18:55:57 compute-0 polkitd[43401]: Collecting garbage unconditionally...
Jan 20 18:55:57 compute-0 polkitd[43401]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 18:55:57 compute-0 polkitd[43401]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 18:55:57 compute-0 polkitd[43401]: Finished loading, compiling and executing 3 rules
Jan 20 18:55:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:57 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:55:58.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:58 compute-0 groupadd[192613]: group added to /etc/group: name=ceph, GID=167
Jan 20 18:55:58 compute-0 groupadd[192613]: group added to /etc/gshadow: name=ceph
Jan 20 18:55:58 compute-0 groupadd[192613]: new group: name=ceph, GID=167
Jan 20 18:55:58 compute-0 useradd[192619]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 20 18:55:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:58 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:58 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:55:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:55:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:55:58.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:55:59 compute-0 ceph-mon[74381]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:55:59 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:55:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:55:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:59] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:55:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:55:59] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:56:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:56:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:00.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:56:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:00 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:00 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:00.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:01 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:01 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 20 18:56:01 compute-0 sshd[1004]: Received signal 15; terminating.
Jan 20 18:56:01 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 20 18:56:01 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 20 18:56:01 compute-0 systemd[1]: sshd.service: Consumed 2.521s CPU time, read 32.0K from disk, written 0B to disk.
Jan 20 18:56:01 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 20 18:56:01 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 20 18:56:01 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 18:56:01 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 18:56:01 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 18:56:01 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 20 18:56:01 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 20 18:56:01 compute-0 sshd[193329]: Server listening on 0.0.0.0 port 22.
Jan 20 18:56:01 compute-0 sshd[193329]: Server listening on :: port 22.
Jan 20 18:56:01 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 20 18:56:01 compute-0 podman[193316]: 2026-01-20 18:56:01.906565064 +0000 UTC m=+0.054562737 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 18:56:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:02.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:02 compute-0 ceph-mon[74381]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:02 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:02 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:02.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:03 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:03 compute-0 ceph-mon[74381]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:56:03 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:56:03 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:56:03 compute-0 systemd[1]: Reloading.
Jan 20 18:56:03 compute-0 systemd-rc-local-generator[193598]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:03 compute-0 systemd-sysv-generator[193601]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:04.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:04 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:56:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:04 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:04 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:04.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:05 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185605 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:56:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:06 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:06 compute-0 ceph-mon[74381]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:56:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:06 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:06.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:07.037Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:56:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:07.037Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:56:07 compute-0 sudo[173387]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:07 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:07 compute-0 ceph-mon[74381]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:08.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:08 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:08 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:08 compute-0 ceph-mon[74381]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:08.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:09 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:09] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:56:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:09] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:56:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:10.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:10 compute-0 podman[200238]: 2026-01-20 18:56:10.156562915 +0000 UTC m=+0.132063074 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 20 18:56:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:10 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:10 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:10.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:11 compute-0 ceph-mon[74381]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:56:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:11 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:11 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:56:11 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:56:11 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.370s CPU time.
Jan 20 18:56:11 compute-0 systemd[1]: run-r7ddd9601447047b9959e5b930a047ec7.service: Deactivated successfully.
Jan 20 18:56:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:12.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:12 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:12 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:12 compute-0 sudo[202042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:56:12 compute-0 sudo[202042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:12 compute-0 sudo[202042]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:12.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:13 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:13 compute-0 ceph-mon[74381]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:56:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:14 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:14 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:14 compute-0 ceph-mon[74381]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:56:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:14.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:15 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:56:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:16.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:16 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:16 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:17.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:17.038Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:56:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:17 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:56:18 compute-0 ceph-mon[74381]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:56:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:18.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:18 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:56:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:18 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:56:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:18 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:18 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:19.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:19 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:56:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:56:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:56:20 compute-0 ceph-mon[74381]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:56:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:20.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:20 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:20 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:21.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:21 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:21 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:56:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:56:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:22.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:22 compute-0 ceph-mon[74381]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:56:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:22 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:23.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:23 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:56:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:24.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:24 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:24 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:24 compute-0 ceph-mon[74381]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:56:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:56:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:25.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:56:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:25 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8001080 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:56:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:26.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:26 compute-0 ceph-mon[74381]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:56:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:56:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:26 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:26 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:27.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:27.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:56:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:27 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:56:27 compute-0 ceph-mon[74381]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:56:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:28.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:28 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8001f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:28 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:28 compute-0 ceph-mon[74381]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:56:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:29.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:29 compute-0 sudo[202210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znkcrmqyhaghxmgfskwxgfikqsonkxxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935388.5658002-963-132257186564911/AnsiballZ_systemd.py'
Jan 20 18:56:29 compute-0 sudo[202210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:29 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:29 compute-0 python3.9[202212]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:56:29 compute-0 systemd[1]: Reloading.
Jan 20 18:56:29 compute-0 systemd-rc-local-generator[202239]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:29 compute-0 systemd-sysv-generator[202244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:56:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185629 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:56:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:29] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:56:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:29] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:56:29 compute-0 sudo[202210]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:30.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:56:30.188 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:56:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:56:30.189 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:56:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:56:30.189 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:56:30 compute-0 sudo[202402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxsvdribkhdqrvvixwzxmocoorkhdnkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935390.0011222-963-47117483057392/AnsiballZ_systemd.py'
Jan 20 18:56:30 compute-0 sudo[202402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:30 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8001f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:30 compute-0 python3.9[202404]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:56:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:30 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8001f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:30 compute-0 systemd[1]: Reloading.
Jan 20 18:56:30 compute-0 systemd-rc-local-generator[202430]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:30 compute-0 systemd-sysv-generator[202436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:30 compute-0 ceph-mon[74381]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:56:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:31.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:31 compute-0 sudo[202402]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:31 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8001f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:31 compute-0 sudo[202591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcsdotkdtygkkfmimcczjzwxfehswzws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935391.1929533-963-275690572355743/AnsiballZ_systemd.py'
Jan 20 18:56:31 compute-0 sudo[202591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:56:31 compute-0 python3.9[202593]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:56:31 compute-0 systemd[1]: Reloading.
Jan 20 18:56:31 compute-0 systemd-sysv-generator[202626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:31 compute-0 systemd-rc-local-generator[202623]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:32 compute-0 sudo[202591]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:32 compute-0 podman[202634]: 2026-01-20 18:56:32.117133731 +0000 UTC m=+0.054916239 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:56:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:32.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:32 compute-0 sudo[202802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkqiaomxevdroosdoetzyeecghejfeta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935392.1941628-963-91555237560597/AnsiballZ_systemd.py'
Jan 20 18:56:32 compute-0 sudo[202802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:32 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:32 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:32 compute-0 python3.9[202804]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:56:32 compute-0 systemd[1]: Reloading.
Jan 20 18:56:32 compute-0 systemd-rc-local-generator[202833]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:32 compute-0 systemd-sysv-generator[202836]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:33.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:33 compute-0 sudo[202841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:56:33 compute-0 sudo[202841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:33 compute-0 sudo[202841]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:33 compute-0 sudo[202802]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:33 compute-0 ceph-mon[74381]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:56:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:33 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc40023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:56:34 compute-0 sudo[203018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzusuxwuhremujgqprkqbgxeagzpzizf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935393.8735228-1050-231741346164130/AnsiballZ_systemd.py'
Jan 20 18:56:34 compute-0 sudo[203018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:34.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:34 compute-0 python3.9[203020]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:34 compute-0 systemd[1]: Reloading.
Jan 20 18:56:34 compute-0 systemd-rc-local-generator[203051]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:34 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:34 compute-0 systemd-sysv-generator[203054]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:34 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:34 compute-0 sudo[203018]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:35.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:35 compute-0 sudo[203208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpkgmykcdicqxasopvtzifzvtwhzdkrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935394.945091-1050-2229330598465/AnsiballZ_systemd.py'
Jan 20 18:56:35 compute-0 sudo[203208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:35 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:35 compute-0 python3.9[203210]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:35 compute-0 ceph-mon[74381]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:56:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:35 compute-0 systemd[1]: Reloading.
Jan 20 18:56:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:56:35 compute-0 systemd-rc-local-generator[203243]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:35 compute-0 systemd-sysv-generator[203246]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:35 compute-0 sudo[203208]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:36.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:36 compute-0 sudo[203400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgwjfchowsmhdcplawywxpnovsujbaug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935396.0902648-1050-34091304283646/AnsiballZ_systemd.py'
Jan 20 18:56:36 compute-0 sudo[203400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:36 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:36 compute-0 python3.9[203402]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:36 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:36 compute-0 systemd[1]: Reloading.
Jan 20 18:56:36 compute-0 systemd-rc-local-generator[203433]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:36 compute-0 systemd-sysv-generator[203436]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:37.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:37.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:56:37 compute-0 sudo[203400]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:37 compute-0 ceph-mon[74381]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:56:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:37 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:37 compute-0 sudo[203590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izvwoxjukgonbkmhybmgzrsmicpaqqvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935397.214904-1050-5343540135875/AnsiballZ_systemd.py'
Jan 20 18:56:37 compute-0 sudo[203590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:56:37 compute-0 python3.9[203592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:37 compute-0 sudo[203590]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:38.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:38 compute-0 sudo[203747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkejqwpqehywwwicsgtqylzoqitodhjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935398.029263-1050-232982147572087/AnsiballZ_systemd.py'
Jan 20 18:56:38 compute-0 sudo[203747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:38 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:38 compute-0 python3.9[203749]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:38 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:38 compute-0 systemd[1]: Reloading.
Jan 20 18:56:38 compute-0 ceph-mon[74381]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:56:38 compute-0 systemd-sysv-generator[203784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:38 compute-0 systemd-rc-local-generator[203780]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:39.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:39 compute-0 sudo[203747]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:39 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc80031c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:39] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:56:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:39] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:56:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:40.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:40 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:40 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:40 compute-0 ceph-mon[74381]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:56:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:41.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:41 compute-0 podman[203874]: 2026-01-20 18:56:41.109535038 +0000 UTC m=+0.087008478 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 18:56:41 compute-0 sudo[203965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbfocmcrwrnelnbipzxgiyzekugvivmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935400.9152818-1158-191320799054710/AnsiballZ_systemd.py'
Jan 20 18:56:41 compute-0 sudo[203965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:41 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:41 compute-0 python3.9[203967]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:56:41 compute-0 systemd[1]: Reloading.
Jan 20 18:56:41 compute-0 systemd-rc-local-generator[203997]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:41 compute-0 systemd-sysv-generator[204000]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:56:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:41 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 20 18:56:41 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 20 18:56:41 compute-0 sudo[203965]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:42.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:42 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8004120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:42 compute-0 sudo[204160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gparmlckwzvrzwcjwytjnydcwsygamam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935402.3657532-1182-156353876577086/AnsiballZ_systemd.py'
Jan 20 18:56:42 compute-0 sudo[204160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:42 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:42 compute-0 ceph-mon[74381]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:42 compute-0 python3.9[204162]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:43.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:43 compute-0 sudo[204160]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:43 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:43 compute-0 sudo[204315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwmhzqbrxyoudplinsnlynovijnueoxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935403.1868849-1182-227980261895096/AnsiballZ_systemd.py'
Jan 20 18:56:43 compute-0 sudo[204315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread fragmentation_score=0.000035 took=0.000045s
Jan 20 18:56:43 compute-0 python3.9[204317]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:43 compute-0 sudo[204315]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:43 compute-0 auditd[700]: Audit daemon rotating log files
Jan 20 18:56:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:44.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:44 compute-0 sudo[204472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbngkrcssulyunlfplxaaqjjacdtvqdu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935403.923085-1182-59668877282292/AnsiballZ_systemd.py'
Jan 20 18:56:44 compute-0 sudo[204472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:44 compute-0 python3.9[204474]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:44 compute-0 sudo[204472]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:44 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:44 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8004120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:44 compute-0 ceph-mon[74381]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:44 compute-0 sudo[204627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfxgnztlphqxtzwcuztkyzuwmjtbdupo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935404.6986156-1182-147807884519333/AnsiballZ_systemd.py'
Jan 20 18:56:44 compute-0 sudo[204627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:45.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:45 compute-0 python3.9[204629]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:45 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:46.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:46 compute-0 sudo[204627]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:46 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:46 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:46 compute-0 sudo[204784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lljuftsvyryewyjgbqvrrrqjgufotytt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935406.4817822-1182-61604418088062/AnsiballZ_systemd.py'
Jan 20 18:56:46 compute-0 sudo[204784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:46 compute-0 ceph-mon[74381]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:56:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:47.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:56:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:47.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:56:47 compute-0 python3.9[204786]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:47 compute-0 sudo[204784]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:47 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8004120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:56:47 compute-0 sudo[204941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdplodetkrqjqfcurmepqndwddkdpdhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935407.4204996-1182-76808232091356/AnsiballZ_systemd.py'
Jan 20 18:56:47 compute-0 sudo[204941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:48 compute-0 python3.9[204943]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:48 compute-0 sudo[204941]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:48.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:48 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:48 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:48 compute-0 sudo[205096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huuxyeyckuqhdkxhwwwephervdhpsiyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935408.254624-1182-267273850845007/AnsiballZ_systemd.py'
Jan 20 18:56:48 compute-0 sudo[205096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:48 compute-0 ceph-mon[74381]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:56:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:49.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:49 compute-0 python3.9[205098]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:49 compute-0 sudo[205096]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:49 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:49 compute-0 sudo[205253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blkoudmdflgwmbzdjunxgcvupwyusqik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935409.300602-1182-84580709880816/AnsiballZ_systemd.py'
Jan 20 18:56:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:49 compute-0 sudo[205253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:49] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 18:56:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:49] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 18:56:49 compute-0 python3.9[205255]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:50 compute-0 sudo[205253]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:50.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:50 compute-0 sudo[205382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:56:50 compute-0 sudo[205382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:50 compute-0 sudo[205382]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:50 compute-0 sudo[205433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmxnbdqwsrpwlrabugurftabvgxazimk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935410.2261636-1182-247191598603752/AnsiballZ_systemd.py'
Jan 20 18:56:50 compute-0 sudo[205433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:50 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8004120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:50 compute-0 sudo[205434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:56:50 compute-0 sudo[205434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:50 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:50 compute-0 python3.9[205444]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:50 compute-0 sudo[205433]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:50 compute-0 ceph-mon[74381]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:51.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:51 compute-0 sudo[205434]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:56:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:56:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:51 compute-0 sudo[205665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cixhvrwjpjdctfdcqkyskvtukdzjnzzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935411.100519-1182-200672484425180/AnsiballZ_systemd.py'
Jan 20 18:56:51 compute-0 sudo[205665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:51 compute-0 sudo[205626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:56:51 compute-0 sudo[205626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:51 compute-0 sudo[205626]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:51 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:51 compute-0 sudo[205673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:56:51 compute-0 sudo[205673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:51 compute-0 python3.9[205670]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:51 compute-0 sudo[205665]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.85918846 +0000 UTC m=+0.037798349 container create 4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_maxwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 18:56:51 compute-0 systemd[1]: Started libpod-conmon-4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0.scope.
Jan 20 18:56:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.840415839 +0000 UTC m=+0.019025758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.9431028 +0000 UTC m=+0.121712709 container init 4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_maxwell, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.951001255 +0000 UTC m=+0.129611144 container start 4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_maxwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.955884584 +0000 UTC m=+0.134494473 container attach 4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 18:56:51 compute-0 nice_maxwell[205790]: 167 167
Jan 20 18:56:51 compute-0 systemd[1]: libpod-4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0.scope: Deactivated successfully.
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.957377611 +0000 UTC m=+0.135987500 container died 4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_maxwell, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:56:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc431edc7fee66cec359b222fcf541dd3f817feb64611a7afffd8df9c1be2710-merged.mount: Deactivated successfully.
Jan 20 18:56:51 compute-0 podman[205745]: 2026-01-20 18:56:51.996227825 +0000 UTC m=+0.174837714 container remove 4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nice_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 18:56:52 compute-0 systemd[1]: libpod-conmon-4e54969f8ecc50c4c1063bbc69dd561b91b66c63b25d37204f2a61f231d89df0.scope: Deactivated successfully.
Jan 20 18:56:52 compute-0 podman[205872]: 2026-01-20 18:56:52.15775796 +0000 UTC m=+0.047533248 container create 37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 18:56:52 compute-0 systemd[1]: Started libpod-conmon-37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9.scope.
Jan 20 18:56:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:52.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da3312e01d3387c387ca883f6e7f3fb283cb4ddce39557e16ca4cb2c573ee5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da3312e01d3387c387ca883f6e7f3fb283cb4ddce39557e16ca4cb2c573ee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da3312e01d3387c387ca883f6e7f3fb283cb4ddce39557e16ca4cb2c573ee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da3312e01d3387c387ca883f6e7f3fb283cb4ddce39557e16ca4cb2c573ee5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4da3312e01d3387c387ca883f6e7f3fb283cb4ddce39557e16ca4cb2c573ee5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:52 compute-0 podman[205872]: 2026-01-20 18:56:52.138286682 +0000 UTC m=+0.028061990 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:56:52 compute-0 podman[205872]: 2026-01-20 18:56:52.236891314 +0000 UTC m=+0.126666612 container init 37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cerf, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 18:56:52 compute-0 podman[205872]: 2026-01-20 18:56:52.246468439 +0000 UTC m=+0.136243727 container start 37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 18:56:52 compute-0 podman[205872]: 2026-01-20 18:56:52.249701558 +0000 UTC m=+0.139476846 container attach 37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cerf, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 18:56:52 compute-0 sudo[205955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zapwnyqfdcvdjmacsbifxllishhyzwtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935411.9322329-1182-229534222426799/AnsiballZ_systemd.py'
Jan 20 18:56:52 compute-0 sudo[205955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:52 compute-0 pensive_cerf[205923]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:56:52 compute-0 pensive_cerf[205923]: --> All data devices are unavailable
Jan 20 18:56:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:52 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:52 compute-0 systemd[1]: libpod-37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9.scope: Deactivated successfully.
Jan 20 18:56:52 compute-0 python3.9[205957]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:52 compute-0 podman[205968]: 2026-01-20 18:56:52.622183284 +0000 UTC m=+0.022463363 container died 37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:56:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4da3312e01d3387c387ca883f6e7f3fb283cb4ddce39557e16ca4cb2c573ee5-merged.mount: Deactivated successfully.
Jan 20 18:56:52 compute-0 podman[205968]: 2026-01-20 18:56:52.664180265 +0000 UTC m=+0.064460324 container remove 37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_cerf, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:56:52 compute-0 systemd[1]: libpod-conmon-37bb8f76a3965fef0a8f770fff169ff060601ce08439521ba02fef69bdfc0eb9.scope: Deactivated successfully.
Jan 20 18:56:52 compute-0 sudo[205673]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:52 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc8004120 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:52 compute-0 sudo[205986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:56:52 compute-0 sudo[205986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:52 compute-0 sudo[205986]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:52 compute-0 sudo[206011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:56:52 compute-0 sudo[206011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:52 compute-0 ceph-mon[74381]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:53.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
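[annotation] The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 that repeat every second or so through this section are load-balancer-style liveness probes against the radosgw beast frontend; each one returns 200 with near-zero latency. A minimal sketch of such a probe is below — the host and port are assumptions, since the log never shows the RGW bind address:

    # Minimal RGW liveness probe mirroring the anonymous "HEAD /" requests
    # in the beast access log above. Host/port are assumptions; the log
    # does not record where radosgw is listening.
    import http.client

    def rgw_alive(host="192.168.122.100", port=8080, timeout=2.0):
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        try:
            conn.request("HEAD", "/")      # anonymous, no auth headers
            resp = conn.getresponse()
            return resp.status == 200      # matches http_status=200 above
        except OSError:
            return False
        finally:
            conn.close()

    if __name__ == "__main__":
        print("rgw alive:", rgw_alive())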
Jan 20 18:56:53 compute-0 sudo[206076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:56:53 compute-0 sudo[206076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:53 compute-0 sudo[206076]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.166274033 +0000 UTC m=+0.041248504 container create 851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:56:53 compute-0 systemd[1]: Started libpod-conmon-851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8.scope.
Jan 20 18:56:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.147312877 +0000 UTC m=+0.022287368 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.243181641 +0000 UTC m=+0.118156132 container init 851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_booth, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.249234529 +0000 UTC m=+0.124209000 container start 851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_booth, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.252135921 +0000 UTC m=+0.127110392 container attach 851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:56:53 compute-0 agitated_booth[206117]: 167 167
Jan 20 18:56:53 compute-0 systemd[1]: libpod-851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8.scope: Deactivated successfully.
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.254269013 +0000 UTC m=+0.129243474 container died 851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_booth, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b80492a5345fd4ea70b9ddf9dd9a9d92412fb47898b398c2d75aabfe664c6d07-merged.mount: Deactivated successfully.
Jan 20 18:56:53 compute-0 podman[206083]: 2026-01-20 18:56:53.285048619 +0000 UTC m=+0.160023090 container remove 851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_booth, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:56:53 compute-0 systemd[1]: libpod-conmon-851fb331b26e9b8e5777e7a727e59f98ea68076ce81566e113b84b0486084bc8.scope: Deactivated successfully.
Jan 20 18:56:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:53 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.452194153 +0000 UTC m=+0.047244121 container create 2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wing, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:56:53 compute-0 systemd[1]: Started libpod-conmon-2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88.scope.
Jan 20 18:56:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61380dfb451d907256f3d86858b6fa853ce2560f5b8781992e7119e6c0c349f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.432419837 +0000 UTC m=+0.027469855 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61380dfb451d907256f3d86858b6fa853ce2560f5b8781992e7119e6c0c349f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61380dfb451d907256f3d86858b6fa853ce2560f5b8781992e7119e6c0c349f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f61380dfb451d907256f3d86858b6fa853ce2560f5b8781992e7119e6c0c349f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.539526707 +0000 UTC m=+0.134576675 container init 2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wing, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.547616355 +0000 UTC m=+0.142666323 container start 2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.550740742 +0000 UTC m=+0.145790710 container attach 2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wing, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:56:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:53 compute-0 sudo[205955]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:53 compute-0 heuristic_wing[206156]: {
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:     "0": [
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:         {
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "devices": [
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "/dev/loop3"
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             ],
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "lv_name": "ceph_lv0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "lv_size": "21470642176",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "name": "ceph_lv0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "tags": {
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.cluster_name": "ceph",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.crush_device_class": "",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.encrypted": "0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.osd_id": "0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.type": "block",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.vdo": "0",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:                 "ceph.with_tpm": "0"
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             },
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "type": "block",
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:             "vg_name": "ceph_vg0"
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:         }
Jan 20 18:56:53 compute-0 heuristic_wing[206156]:     ]
Jan 20 18:56:53 compute-0 heuristic_wing[206156]: }
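[annotation] The container output above is the JSON document produced by the `ceph-volume ... lvm list --format json` command launched at 18:56:52: a mapping of OSD id to a list of LV records, each carrying its ceph.* tags. A sketch of walking that structure — the field names are taken directly from the output above; the JSON is assumed to arrive on stdin:

    # Summarize `ceph-volume lvm list --format json` output like the
    # record printed above. Field names come from that output.
    import json, sys

    def summarize(lvm_list: dict) -> None:
        for osd_id, lvs in lvm_list.items():
            for lv in lvs:
                tags = lv.get("tags", {})
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"on {','.join(lv['devices'])} "
                      f"type={lv['type']} "
                      f"osd_fsid={tags.get('ceph.osd_fsid', '?')}")

    if __name__ == "__main__":
        summarize(json.load(sys.stdin))

For the record above, this prints a single line for osd.0 backed by /dev/ceph_vg0/ceph_lv0 on /dev/loop3.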
Jan 20 18:56:53 compute-0 systemd[1]: libpod-2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88.scope: Deactivated successfully.
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.883711097 +0000 UTC m=+0.478761075 container died 2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 18:56:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f61380dfb451d907256f3d86858b6fa853ce2560f5b8781992e7119e6c0c349f-merged.mount: Deactivated successfully.
Jan 20 18:56:53 compute-0 podman[206140]: 2026-01-20 18:56:53.920373917 +0000 UTC m=+0.515423885 container remove 2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:56:53 compute-0 systemd[1]: libpod-conmon-2b5599670fce1b533371528f20653bade4fea4af21ae24b557c65bf40286af88.scope: Deactivated successfully.
Jan 20 18:56:53 compute-0 sudo[206011]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:54 compute-0 sudo[206282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:56:54 compute-0 sudo[206282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:54 compute-0 sudo[206282]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:54 compute-0 sudo[206374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgjukfygiqjquptzplfhhdudooigtyqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935413.7952037-1182-240703287532800/AnsiballZ_systemd.py'
Jan 20 18:56:54 compute-0 sudo[206374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:54 compute-0 sudo[206334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:56:54 compute-0 sudo[206334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:54.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:54 compute-0 python3.9[206379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:54 compute-0 sudo[206374]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.484023477 +0000 UTC m=+0.042467293 container create f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:56:54 compute-0 systemd[1]: Started libpod-conmon-f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7.scope.
Jan 20 18:56:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.46702552 +0000 UTC m=+0.025469356 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.564236876 +0000 UTC m=+0.122680722 container init f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_moser, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 18:56:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:54 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.57661027 +0000 UTC m=+0.135054086 container start f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.580391793 +0000 UTC m=+0.138835639 container attach f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_moser, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:56:54 compute-0 vigorous_moser[206466]: 167 167
Jan 20 18:56:54 compute-0 conmon[206466]: conmon f67f75afd207f25b534d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7.scope/container/memory.events
Jan 20 18:56:54 compute-0 systemd[1]: libpod-f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7.scope: Deactivated successfully.
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.582881633 +0000 UTC m=+0.141325469 container died f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:56:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ef9d6aacfcffbd32e251c53f60481a4c8ebfdae39f74e36f7fee6008eabe39e-merged.mount: Deactivated successfully.
Jan 20 18:56:54 compute-0 podman[206426]: 2026-01-20 18:56:54.631509548 +0000 UTC m=+0.189953394 container remove f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_moser, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:56:54 compute-0 systemd[1]: libpod-conmon-f67f75afd207f25b534df5ab732534630c65076b483d9ad4bc43f13a948e6ff7.scope: Deactivated successfully.
Jan 20 18:56:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:54 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:54 compute-0 podman[206566]: 2026-01-20 18:56:54.821862852 +0000 UTC m=+0.052596202 container create 4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 18:56:54 compute-0 systemd[1]: Started libpod-conmon-4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d.scope.
Jan 20 18:56:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e22c0be1144e7c4f20712e919ee105b4869c59f6e37cf567bb66e5c75f42aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e22c0be1144e7c4f20712e919ee105b4869c59f6e37cf567bb66e5c75f42aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:54 compute-0 podman[206566]: 2026-01-20 18:56:54.798946349 +0000 UTC m=+0.029679719 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e22c0be1144e7c4f20712e919ee105b4869c59f6e37cf567bb66e5c75f42aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e22c0be1144e7c4f20712e919ee105b4869c59f6e37cf567bb66e5c75f42aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:56:54 compute-0 podman[206566]: 2026-01-20 18:56:54.90897294 +0000 UTC m=+0.139706310 container init 4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shannon, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:56:54 compute-0 podman[206566]: 2026-01-20 18:56:54.917558721 +0000 UTC m=+0.148292071 container start 4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 18:56:54 compute-0 podman[206566]: 2026-01-20 18:56:54.92117379 +0000 UTC m=+0.151907140 container attach 4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shannon, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:56:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:56:54
Jan 20 18:56:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:56:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:56:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'vms', 'backups', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images', '.nfs', 'default.rgw.log', '.mgr']
Jan 20 18:56:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:56:54 compute-0 sudo[206638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvlgkdnwfcusbcffblredkudvxmjgheb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935414.609862-1182-273109405924750/AnsiballZ_systemd.py'
Jan 20 18:56:54 compute-0 sudo[206638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:55.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
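[annotation] Every pg_autoscaler line above applies the same formula: target PGs = (fraction of capacity used) x bias x (target PGs per OSD x number of OSDs), then quantizes the result and leaves pg_num alone unless it differs substantially from the current value. The logged numbers are consistent with a multiplier of 300, i.e. 3 OSDs at the default mon_target_pg_per_osd of 100 — an inference from this 60 GiB cluster with ~20 GiB LVs, not something the log states directly:

    # Reproduce the pg_autoscaler targets logged above.
    # Assumption (inferred, not logged): 3 OSDs x mon_target_pg_per_osd=100.
    TARGET_PG_PER_OSD = 100
    NUM_OSDS = 3

    def pg_target(usage_ratio: float, bias: float) -> float:
        return usage_ratio * bias * TARGET_PG_PER_OSD * NUM_OSDS

    # '.mgr' line: using 7.185749983720779e-06 of space, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))  # -> 0.0021557249951162337
    # 'cephfs.cephfs.meta' line: using 5.087256625643029e-07, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))  # -> 0.0006104707950771635

Targets this tiny quantize to the pool minimum or to the existing pg_num, which is why every pool reports "quantized to N (current N)" with no change.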
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:56:55 compute-0 ceph-mon[74381]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:56:55 compute-0 python3.9[206640]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:55 compute-0 sudo[206638]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:55 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:56:55 compute-0 lvm[206807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:56:55 compute-0 lvm[206807]: VG ceph_vg0 finished
Jan 20 18:56:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:55 compute-0 boring_shannon[206607]: {}
Jan 20 18:56:55 compute-0 podman[206566]: 2026-01-20 18:56:55.712646572 +0000 UTC m=+0.943379922 container died 4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:56:55 compute-0 systemd[1]: libpod-4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d.scope: Deactivated successfully.
Jan 20 18:56:55 compute-0 systemd[1]: libpod-4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d.scope: Consumed 1.216s CPU time.
Jan 20 18:56:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-65e22c0be1144e7c4f20712e919ee105b4869c59f6e37cf567bb66e5c75f42aa-merged.mount: Deactivated successfully.
Jan 20 18:56:55 compute-0 podman[206566]: 2026-01-20 18:56:55.754995173 +0000 UTC m=+0.985728523 container remove 4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shannon, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:56:55 compute-0 systemd[1]: libpod-conmon-4a9fb7c72e0692e8f1626b0e9aa680f15fda1ef38655a6b8ac89f5d1e832119d.scope: Deactivated successfully.
Jan 20 18:56:55 compute-0 sudo[206334]: pam_unix(sudo:session): session closed for user root
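[annotation] This closes the companion `ceph-volume ... raw list --format json` invocation from 18:56:54: the `boring_shannon` container's entire output was the empty object `{}`, i.e. no raw (non-LVM) OSD devices exist on this host. A hedged sketch of driving the same cephadm call is below — the paths, fsid, and timeout are copied from the sudo line above; the `--image` flag is omitted and the surrounding logic is an assumption, not cephadm's own code:

    # Run the `ceph-volume raw list` that cephadm logged above and parse
    # its JSON. Paths/fsid copied from the log; error handling assumed.
    import json, subprocess

    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    def raw_list() -> dict:
        out = subprocess.run(
            ["sudo", "python3", CEPHADM, "--timeout", "895",
             "ceph-volume", "--fsid", FSID, "--",
             "raw", "list", "--format", "json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    if not raw_list():      # the log shows exactly this: {}
        print("no raw (non-LVM) OSD devices on this host")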
Jan 20 18:56:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:56:55 compute-0 sudo[206880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzljysbzdllkwqoijpcyoxgioyufphwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935415.5170867-1182-201204914034843/AnsiballZ_systemd.py'
Jan 20 18:56:55 compute-0 sudo[206880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:56:55 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:55 compute-0 sudo[206883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:56:55 compute-0 sudo[206883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:56:55 compute-0 sudo[206883]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:56 compute-0 python3.9[206882]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 18:56:56 compute-0 sudo[206880]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:56:56 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:56 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:56:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:56 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:56 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:56:57.043Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
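[annotation] The alertmanager error above means both configured ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443) failed to answer a POST to /api/prometheus_receiver before the notification timeout, so the retries were cancelled. A minimal stand-in receiver for checking reachability from this host might look like the following — stdlib only, the path is taken from the log, and everything else is an assumption (the real dashboard endpoint speaks TLS and parses the Alertmanager webhook JSON):

    # Stand-in for the /api/prometheus_receiver endpoint shown timing out
    # above. Illustration only: it just accepts and drains the POST.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path == "/api/prometheus_receiver":
                length = int(self.headers.get("Content-Length", 0))
                self.rfile.read(length)    # drain the alert payload
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()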
Jan 20 18:56:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:57.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:57 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfac001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:57 compute-0 ceph-mon[74381]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:56:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:56:58.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:58 compute-0 sudo[207063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhmjtyifaejyqacvuirvxbqktihwwqpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935418.038579-1488-94526642433070/AnsiballZ_file.py'
Jan 20 18:56:58 compute-0 sudo[207063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:58 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:58 compute-0 python3.9[207065]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:58 compute-0 sudo[207063]: pam_unix(sudo:session): session closed for user root
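[annotation] From here on the zuul user's Ansible run leaves a regular three-line pattern in the journal: a sudo BECOME line, a `python3.9[...]: ansible-<module> Invoked with key=value ...` line recording the module arguments, and a matching session-close line. A quick way to pull those module calls out of a journal dump — naive whitespace splitting, which is sufficient for the argument style in these lines, where values contain no spaces:

    # Extract "ansible-<module> Invoked with ..." records from journal
    # text, e.g. the ansible.builtin.file call above. Assumes values
    # contain no spaces, which holds for the lines in this log.
    import re, sys

    PAT = re.compile(r"ansible-(\S+) Invoked with (.*)")

    for line in sys.stdin:
        m = PAT.search(line)
        if not m:
            continue
        module, argstr = m.groups()
        args = dict(kv.split("=", 1) for kv in argstr.split())
        print(module, {k: v for k, v in args.items() if v != "None"})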
Jan 20 18:56:58 compute-0 ceph-mon[74381]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 18:56:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:58 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:56:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:56:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:56:59.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:56:59 compute-0 sudo[207216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egwawlvckwzfpbkuxcnionftktxmmpxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935418.7573857-1488-11943572412538/AnsiballZ_file.py'
Jan 20 18:56:59 compute-0 sudo[207216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:59 compute-0 python3.9[207218]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:59 compute-0 sudo[207216]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:56:59 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:56:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:56:59 compute-0 sudo[207370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trtisoivfyzdsjlgcphasjpvzmdwpgmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935419.4548454-1488-268257835328858/AnsiballZ_file.py'
Jan 20 18:56:59 compute-0 sudo[207370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:56:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:56:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:56:59 compute-0 python3.9[207372]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:59 compute-0 sudo[207370]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:57:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:00.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:57:00 compute-0 sudo[207522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqyprrdpudwlcgeqrrbqspztfzytpskp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935420.0716145-1488-239848264518073/AnsiballZ_file.py'
Jan 20 18:57:00 compute-0 sudo[207522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:00 compute-0 python3.9[207524]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:57:00 compute-0 sudo[207522]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:57:00 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfa8001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:57:00 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfc4003e00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:00 compute-0 ceph-mon[74381]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:00 compute-0 sudo[207674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgderjshetpxubpdjlpbtvvglhqfddmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935420.6611407-1488-106708559314285/AnsiballZ_file.py'
Jan 20 18:57:00 compute-0 sudo[207674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:01.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:01 compute-0 python3.9[207676]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:57:01 compute-0 sudo[207674]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:57:01 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:01 compute-0 sudo[207828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhjvfjhlkujlvslommlwefzmqqeutcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935421.318451-1488-24553433324230/AnsiballZ_file.py'
Jan 20 18:57:01 compute-0 sudo[207828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:01 compute-0 python3.9[207830]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:57:01 compute-0 sudo[207828]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:02.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:02 compute-0 kernel: ganesha.nfsd[180176]: segfault at 50 ip 00007fd054aa132e sp 00007fcfc0ff8210 error 4 in libntirpc.so.5.8[7fd054a86000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 20 18:57:02 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:57:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[174590]: 20/01/2026 18:57:02 : epoch 696fcf8c : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fcfb4004530 fd 38 proxy ignored for local
Jan 20 18:57:02 compute-0 systemd[1]: Started Process Core Dump (PID 207855/UID 0).
Jan 20 18:57:02 compute-0 podman[207856]: 2026-01-20 18:57:02.724893993 +0000 UTC m=+0.093053267 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 18:57:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:03.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:03 compute-0 python3.9[208001]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:57:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:03 compute-0 systemd-coredump[207857]: Process 174594 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 54:
                                                    #0  0x00007fd054aa132e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:57:03 compute-0 systemd[1]: systemd-coredump@8-207855-0.service: Deactivated successfully.
Jan 20 18:57:03 compute-0 systemd[1]: systemd-coredump@8-207855-0.service: Consumed 1.089s CPU time.
Jan 20 18:57:03 compute-0 podman[208084]: 2026-01-20 18:57:03.826501109 +0000 UTC m=+0.024362879 container died 898bd6e879b4c0478d25cfa0540942e1d4af1388e5ef00aacda3e506863d6952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:57:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceed1f106d38d4325baebe7a3807657814c5394508c1c55ce5d8ab5e3b4d4cc7-merged.mount: Deactivated successfully.
Jan 20 18:57:03 compute-0 podman[208084]: 2026-01-20 18:57:03.863192841 +0000 UTC m=+0.061054631 container remove 898bd6e879b4c0478d25cfa0540942e1d4af1388e5ef00aacda3e506863d6952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:57:03 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:57:04 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:57:04 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.504s CPU time.
Jan 20 18:57:04 compute-0 sudo[208200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzkzrbrrszghpdodbiockpwuikpwiqnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935423.657796-1641-92820388535315/AnsiballZ_stat.py'
Jan 20 18:57:04 compute-0 sudo[208200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:57:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:04.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:57:04 compute-0 python3.9[208202]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:04 compute-0 sudo[208200]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:04 compute-0 ceph-mon[74381]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:04 compute-0 sudo[208325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teuaxbildfluyntnnvgpbdlzjekrfnnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935423.657796-1641-92820388535315/AnsiballZ_copy.py'
Jan 20 18:57:04 compute-0 sudo[208325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:05 compute-0 python3.9[208327]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935423.657796-1641-92820388535315/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:57:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:05.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:57:05 compute-0 sudo[208325]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:05 compute-0 sudo[208477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhommgwrzlyoewkwdrjivncvuxuibjic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935425.244023-1641-133517282296854/AnsiballZ_stat.py'
Jan 20 18:57:05 compute-0 sudo[208477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:05 compute-0 ceph-mon[74381]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:05 compute-0 python3.9[208480]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:05 compute-0 sudo[208477]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:06 compute-0 sudo[208604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ircjmhcjgvgvveyoykwowfwlqolxfuca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935425.244023-1641-133517282296854/AnsiballZ_copy.py'
Jan 20 18:57:06 compute-0 sudo[208604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:06.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:06 compute-0 python3.9[208606]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935425.244023-1641-133517282296854/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:06 compute-0 sudo[208604]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:06 compute-0 ceph-mon[74381]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:57:06 compute-0 sudo[208756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vclhqhlwdcgovduxphmoqdyszdtzwdmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935426.5022035-1641-78009674692766/AnsiballZ_stat.py'
Jan 20 18:57:06 compute-0 sudo[208756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:06 compute-0 python3.9[208758]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:06 compute-0 sudo[208756]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:07.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:57:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:57:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:07.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:57:07 compute-0 sudo[208881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asvtzwublocmprnnosyimtriihpqzddn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935426.5022035-1641-78009674692766/AnsiballZ_copy.py'
Jan 20 18:57:07 compute-0 sudo[208881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:07 compute-0 python3.9[208883]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935426.5022035-1641-78009674692766/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:07 compute-0 sudo[208881]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:57:08 compute-0 sudo[209035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjfiydktoldljyyuhvvhcpqfzkeciwdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935427.7762978-1641-95841535089353/AnsiballZ_stat.py'
Jan 20 18:57:08 compute-0 sudo[209035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:57:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:08.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:57:08 compute-0 python3.9[209037]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:08 compute-0 sudo[209035]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185708 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:57:08 compute-0 sudo[209160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugbizyaimvkywcmtiisutxuivfbsiai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935427.7762978-1641-95841535089353/AnsiballZ_copy.py'
Jan 20 18:57:08 compute-0 sudo[209160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:08 compute-0 python3.9[209162]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935427.7762978-1641-95841535089353/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:08 compute-0 sudo[209160]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000023s ======
Jan 20 18:57:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:09.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Jan 20 18:57:09 compute-0 sudo[209312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfdhcraizezgzveaxlghslyupyjlnzml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935429.1217175-1641-85048660939168/AnsiballZ_stat.py'
Jan 20 18:57:09 compute-0 sudo[209312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:09 compute-0 python3.9[209314]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:09 compute-0 ceph-mon[74381]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:57:09 compute-0 sudo[209312]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:57:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:57:09 compute-0 sudo[209439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-matponzdtfhrtiuxdrepcppdetdnbcru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935429.1217175-1641-85048660939168/AnsiballZ_copy.py'
Jan 20 18:57:09 compute-0 sudo[209439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:10 compute-0 python3.9[209441]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935429.1217175-1641-85048660939168/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:10 compute-0 sudo[209439]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:10.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:10 compute-0 sudo[209591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owgrbsntokloxfbujozlnhltauxhqhja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935430.3344948-1641-240976985875629/AnsiballZ_stat.py'
Jan 20 18:57:10 compute-0 sudo[209591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:10 compute-0 python3.9[209593]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:10 compute-0 sudo[209591]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:10 compute-0 ceph-mon[74381]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:57:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:11.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:11 compute-0 sudo[209730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpomkcdkiycvltgjcmiclywzwasqepfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935430.3344948-1641-240976985875629/AnsiballZ_copy.py'
Jan 20 18:57:11 compute-0 sudo[209730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:11 compute-0 podman[209690]: 2026-01-20 18:57:11.339472082 +0000 UTC m=+0.100692863 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 20 18:57:11 compute-0 python3.9[209737]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935430.3344948-1641-240976985875629/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:11 compute-0 sudo[209730]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:11 compute-0 sudo[209896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkowouoyzvwmhuhqjuvvjzhvgvgnjilj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935431.6296043-1641-281407866885399/AnsiballZ_stat.py'
Jan 20 18:57:11 compute-0 sudo[209896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:12 compute-0 python3.9[209898]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:12 compute-0 sudo[209896]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:57:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:12.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:57:12 compute-0 sudo[210019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bilxmjwtcexqxwrsedgkfleodyognfdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935431.6296043-1641-281407866885399/AnsiballZ_copy.py'
Jan 20 18:57:12 compute-0 sudo[210019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:12 compute-0 python3.9[210021]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935431.6296043-1641-281407866885399/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:12 compute-0 sudo[210019]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:13.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:13 compute-0 sudo[210171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeohskvbffxwhdfsqdkmxwoovfujscid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935432.8160474-1641-63529803737324/AnsiballZ_stat.py'
Jan 20 18:57:13 compute-0 sudo[210171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:13 compute-0 ceph-mon[74381]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:13 compute-0 sudo[210174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:57:13 compute-0 sudo[210174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:57:13 compute-0 sudo[210174]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:13 compute-0 python3.9[210173]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:13 compute-0 sudo[210171]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:13 compute-0 sudo[210323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deiqxoegxduacwbuxoiuhbxqpokdemgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935432.8160474-1641-63529803737324/AnsiballZ_copy.py'
Jan 20 18:57:13 compute-0 sudo[210323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:13 compute-0 python3.9[210325]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768935432.8160474-1641-63529803737324/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:13 compute-0 sudo[210323]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:14.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:14 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 9.
Jan 20 18:57:14 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:57:14 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.504s CPU time.
Jan 20 18:57:14 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:57:14 compute-0 podman[210396]: 2026-01-20 18:57:14.443981267 +0000 UTC m=+0.040352462 container create abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 18:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b6daf5b8a9306411b3ba3d6b50b6160b2d8cccb7ee3d0f7b149895b1266122/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b6daf5b8a9306411b3ba3d6b50b6160b2d8cccb7ee3d0f7b149895b1266122/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b6daf5b8a9306411b3ba3d6b50b6160b2d8cccb7ee3d0f7b149895b1266122/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:57:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8b6daf5b8a9306411b3ba3d6b50b6160b2d8cccb7ee3d0f7b149895b1266122/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:57:14 compute-0 podman[210396]: 2026-01-20 18:57:14.496179638 +0000 UTC m=+0.092550873 container init abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 18:57:14 compute-0 podman[210396]: 2026-01-20 18:57:14.502290358 +0000 UTC m=+0.098661553 container start abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:57:14 compute-0 bash[210396]: abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009
Jan 20 18:57:14 compute-0 podman[210396]: 2026-01-20 18:57:14.424290492 +0000 UTC m=+0.020661727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:57:14 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:57:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:57:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:15.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:15 compute-0 ceph-mon[74381]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:15 compute-0 sudo[210580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glwntnvbdifqrfkmuanwjrfoyqcbvevd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935435.6155674-1980-149908414627996/AnsiballZ_command.py'
Jan 20 18:57:15 compute-0 sudo[210580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:16 compute-0 python3.9[210582]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 20 18:57:16 compute-0 sudo[210580]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:57:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:16.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:57:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:17.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:57:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:17.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:57:17 compute-0 sudo[210735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcfwnbzzckjikrerztikicpqibgywbgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935437.4257474-2007-251095901788984/AnsiballZ_file.py'
Jan 20 18:57:17 compute-0 sudo[210735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:17 compute-0 ceph-mon[74381]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:17 compute-0 python3.9[210737]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:17 compute-0 sudo[210735]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:18.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:18 compute-0 sudo[210887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzxdbqwuaqtyokiywanzrrfuoiojvua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935438.0866072-2007-204053794159430/AnsiballZ_file.py'
Jan 20 18:57:18 compute-0 sudo[210887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:18 compute-0 python3.9[210889]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:18 compute-0 sudo[210887]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:18 compute-0 ceph-mon[74381]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:57:18 compute-0 sudo[211039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cszdadfjjftkbvkvxloibljatqoyqlkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935438.6799212-2007-80435840771466/AnsiballZ_file.py'
Jan 20 18:57:18 compute-0 sudo[211039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:19.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:19 compute-0 python3.9[211041]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:19 compute-0 sudo[211039]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:19 compute-0 sudo[211192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pppwraswqorfexxxdmnxmpvellaouezw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935439.3176477-2007-30110425932642/AnsiballZ_file.py'
Jan 20 18:57:19 compute-0 sudo[211192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:57:19 compute-0 python3.9[211194]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:19 compute-0 sudo[211192]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:57:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:57:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:20.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:20 compute-0 sudo[211345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibxciqvobgquauqjildjpacbxaugrxdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935439.9583836-2007-278952453609138/AnsiballZ_file.py'
Jan 20 18:57:20 compute-0 sudo[211345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:20 compute-0 python3.9[211347]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:20 compute-0 sudo[211345]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:20 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:57:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:20 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:57:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:20 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:57:20 compute-0 ceph-mon[74381]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:57:20 compute-0 sudo[211497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gayxixvqtsluazlpslfsfzbdlcomqfwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935440.5843716-2007-87427532768243/AnsiballZ_file.py'
Jan 20 18:57:20 compute-0 sudo[211497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:21 compute-0 python3.9[211499]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:21 compute-0 sudo[211497]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:21.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:21 compute-0 sudo[211649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-volrrpdmgulfquwtvkwaoxudhfptrmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935441.193199-2007-279263270738552/AnsiballZ_file.py'
Jan 20 18:57:21 compute-0 sudo[211649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:21 compute-0 python3.9[211651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:21 compute-0 sudo[211649]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:57:22 compute-0 sudo[211803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-echvhrryycbexjzoqzzdftwxdwtuekvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935441.7812884-2007-233082390933029/AnsiballZ_file.py'
Jan 20 18:57:22 compute-0 sudo[211803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:22 compute-0 python3.9[211805]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:22.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:22 compute-0 sudo[211803]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:22 compute-0 sudo[211955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zukwsboojsbwfhywqiftrcopeloyvvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935442.395037-2007-224071655825387/AnsiballZ_file.py'
Jan 20 18:57:22 compute-0 sudo[211955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:22 compute-0 python3.9[211957]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:22 compute-0 sudo[211955]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:22 compute-0 ceph-mon[74381]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:57:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:23.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:23 compute-0 sudo[212107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-donegpdkutouymhylpiewnomkehfxqdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935443.015232-2007-149121767446647/AnsiballZ_file.py'
Jan 20 18:57:23 compute-0 sudo[212107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:23 compute-0 python3.9[212109]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:23 compute-0 sudo[212107]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:57:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185723 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:57:23 compute-0 sudo[212261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kruuxpmlorziwizsnufvbrjrprwoitqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935443.6603565-2007-273337561215030/AnsiballZ_file.py'
Jan 20 18:57:23 compute-0 sudo[212261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:24 compute-0 python3.9[212263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:24 compute-0 sudo[212261]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:24.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:24 compute-0 sudo[212413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdfrthzsabjtqgrecqwogwdoxxwnytkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935444.3157873-2007-144945985952090/AnsiballZ_file.py'
Jan 20 18:57:24 compute-0 sudo[212413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:24 compute-0 python3.9[212415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:24 compute-0 sudo[212413]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:24 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:57:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:24 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:57:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:24 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:57:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:25 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:57:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:25.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:57:25 compute-0 sudo[212565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvgoicuvmxepyalsfyeqwtpzbolvddls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935444.940793-2007-190164279212660/AnsiballZ_file.py'
Jan 20 18:57:25 compute-0 sudo[212565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:25 compute-0 python3.9[212567]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:25 compute-0 sudo[212565]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:57:25 compute-0 sudo[212719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spvcxmgdsncplggcrmxzqwugmeurdjbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935445.5037775-2007-137928435418142/AnsiballZ_file.py'
Jan 20 18:57:25 compute-0 sudo[212719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:25 compute-0 ceph-mon[74381]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:57:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:57:25 compute-0 python3.9[212721]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:25 compute-0 sudo[212719]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:26.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:26 compute-0 ceph-mon[74381]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:57:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:27.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:57:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:27.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:57:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:28.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:28 compute-0 sudo[212873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fasdlgqdyjogfxeqnzyxwvkxlnxfewcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935448.3719192-2304-135649627759740/AnsiballZ_stat.py'
Jan 20 18:57:28 compute-0 sudo[212873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:28 compute-0 ceph-mon[74381]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Jan 20 18:57:28 compute-0 python3.9[212875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:28 compute-0 sudo[212873]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:29.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:29 compute-0 sudo[212996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeynahggjqvvpigdezbwlzzvqcvubmhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935448.3719192-2304-135649627759740/AnsiballZ_copy.py'
Jan 20 18:57:29 compute-0 sudo[212996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:29 compute-0 python3.9[212998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935448.3719192-2304-135649627759740/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:29 compute-0 sudo[212996]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:29 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:57:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:29 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:57:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:29 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:57:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:57:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:29] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:57:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:29] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:57:29 compute-0 sudo[213150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsbcikboplkusjvwpczeoemvbmnzhkxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935449.6020596-2304-99456926534789/AnsiballZ_stat.py'
Jan 20 18:57:29 compute-0 sudo[213150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:57:30.190 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:57:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:57:30.191 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:57:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:57:30.191 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:57:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:30.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:30 compute-0 python3.9[213152]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:30 compute-0 ceph-mon[74381]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:57:30 compute-0 sudo[213150]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.726367) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935450726414, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 3970, "num_deletes": 502, "total_data_size": 8066393, "memory_usage": 8177368, "flush_reason": "Manual Compaction"}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935450754776, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 4509484, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13602, "largest_seqno": 17571, "table_properties": {"data_size": 4497813, "index_size": 6629, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3973, "raw_key_size": 31388, "raw_average_key_size": 19, "raw_value_size": 4470376, "raw_average_value_size": 2845, "num_data_blocks": 290, "num_entries": 1571, "num_filter_entries": 1571, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935012, "oldest_key_time": 1768935012, "file_creation_time": 1768935450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 28562 microseconds, and 7520 cpu microseconds.
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.754935) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 4509484 bytes OK
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.754979) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.757580) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.757596) EVENT_LOG_v1 {"time_micros": 1768935450757590, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.757618) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8050377, prev total WAL file size 8050658, number of live WAL files 2.
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.759986) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(4403KB)], [32(13MB)]
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935450760062, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 18303432, "oldest_snapshot_seqno": -1}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5231 keys, 13950208 bytes, temperature: kUnknown
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935450872963, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 13950208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13913155, "index_size": 22863, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 131126, "raw_average_key_size": 25, "raw_value_size": 13816406, "raw_average_value_size": 2641, "num_data_blocks": 954, "num_entries": 5231, "num_filter_entries": 5231, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935450, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.873185) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 13950208 bytes
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.874394) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.0 rd, 123.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.3, 13.2 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(7.2) write-amplify(3.1) OK, records in: 6047, records dropped: 816 output_compression: NoCompression
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.874409) EVENT_LOG_v1 {"time_micros": 1768935450874401, "job": 14, "event": "compaction_finished", "compaction_time_micros": 112979, "compaction_time_cpu_micros": 37058, "output_level": 6, "num_output_files": 1, "total_output_size": 13950208, "num_input_records": 6047, "num_output_records": 5231, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935450875180, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935450877424, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.759887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.877483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.877489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.877491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.877493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:57:30 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:57:30.877494) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:57:31 compute-0 sudo[213273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemujmvwrbnqsubcjpqprplqdprgnezl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935449.6020596-2304-99456926534789/AnsiballZ_copy.py'
Jan 20 18:57:31 compute-0 sudo[213273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:31.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:31 compute-0 python3.9[213275]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935449.6020596-2304-99456926534789/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:31 compute-0 sudo[213273]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:31 compute-0 sudo[213427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcxalehlflvpnzlhrlfnwilcdysoalkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935451.3800216-2304-72845256175768/AnsiballZ_stat.py'
Jan 20 18:57:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:57:31 compute-0 sudo[213427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:31 compute-0 python3.9[213429]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:31 compute-0 sudo[213427]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:32 compute-0 sudo[213550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aywchrywxckzddlarifezudttuzckuxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935451.3800216-2304-72845256175768/AnsiballZ_copy.py'
Jan 20 18:57:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:32.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:32 compute-0 sudo[213550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:32 compute-0 python3.9[213552]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935451.3800216-2304-72845256175768/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:32 compute-0 sudo[213550]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:32 compute-0 ceph-mon[74381]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:57:32 compute-0 sudo[213715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcncqxipfngypibfkcaozevkwdbzbutw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935452.6172233-2304-6400134396566/AnsiballZ_stat.py'
Jan 20 18:57:32 compute-0 sudo[213715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:32 compute-0 podman[213676]: 2026-01-20 18:57:32.964489989 +0000 UTC m=+0.054788234 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 18:57:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:33.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:33 compute-0 python3.9[213722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:33 compute-0 sudo[213715]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:33 compute-0 sudo[213772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:57:33 compute-0 sudo[213772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:57:33 compute-0 sudo[213772]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:33 compute-0 sudo[213870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shdqpewvaqwniogaznuzqwgkqjsfjnph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935452.6172233-2304-6400134396566/AnsiballZ_copy.py'
Jan 20 18:57:33 compute-0 sudo[213870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:57:33 compute-0 python3.9[213872]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935452.6172233-2304-6400134396566/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:33 compute-0 sudo[213870]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:34 compute-0 sudo[214024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apxfslwzbknqlcbrqhsjsmqdxvbcbblk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935453.9350631-2304-34568420516260/AnsiballZ_stat.py'
Jan 20 18:57:34 compute-0 sudo[214024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:34.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:34 compute-0 python3.9[214026]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:34 compute-0 sudo[214024]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:34 compute-0 sudo[214147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icuoausdvngpqajfxeojxouitixuwdfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935453.9350631-2304-34568420516260/AnsiballZ_copy.py'
Jan 20 18:57:34 compute-0 ceph-mon[74381]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 18:57:34 compute-0 sudo[214147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:34 compute-0 python3.9[214149]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935453.9350631-2304-34568420516260/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:34 compute-0 sudo[214147]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:35.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:35 compute-0 sudo[214299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxbmxistxwluaucakyzgfhfcycjrjcex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935455.095042-2304-233797656310747/AnsiballZ_stat.py'
Jan 20 18:57:35 compute-0 sudo[214299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:35 compute-0 python3.9[214301]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:35 compute-0 sudo[214299]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:57:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:35 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:57:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:57:35 compute-0 sudo[214436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feiodknkqipeomjmtccwvcupvzltyyhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935455.095042-2304-233797656310747/AnsiballZ_copy.py'
Jan 20 18:57:35 compute-0 sudo[214436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:36 compute-0 python3.9[214438]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935455.095042-2304-233797656310747/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:36 compute-0 sudo[214436]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:36.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:36 compute-0 sudo[214588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pcacjqsautwpviqvpgfwluiollleqtob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935456.1491082-2304-157975115348803/AnsiballZ_stat.py'
Jan 20 18:57:36 compute-0 sudo[214588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:36 compute-0 python3.9[214590]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:36 compute-0 sudo[214588]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:36 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40f4000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:36 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4001970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:36 compute-0 ceph-mon[74381]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:57:36 compute-0 sudo[214714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joyknaqepcnojvtwgwteegqvrahtcgim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935456.1491082-2304-157975115348803/AnsiballZ_copy.py'
Jan 20 18:57:36 compute-0 sudo[214714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:37.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:57:37 compute-0 python3.9[214716]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935456.1491082-2304-157975115348803/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:37.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:37 compute-0 sudo[214714]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:37 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:37 compute-0 sudo[214866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjcvxkenrmbcwgujxvrimwmqswkdzwkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935457.2560525-2304-55053550843823/AnsiballZ_stat.py'
Jan 20 18:57:37 compute-0 sudo[214866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:57:37 compute-0 python3.9[214868]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:37 compute-0 sudo[214866]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:38 compute-0 sudo[214991]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkihyzkxsykbsfbmiifcvcealorxyerf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935457.2560525-2304-55053550843823/AnsiballZ_copy.py'
Jan 20 18:57:38 compute-0 sudo[214991]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:38.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:38 compute-0 python3.9[214993]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935457.2560525-2304-55053550843823/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:38 compute-0 sudo[214991]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185738 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:57:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:38 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:38 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:57:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:38 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:57:38 compute-0 sudo[215143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yboeaueaxzfovdixibrtqefxizqwnzog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935458.460275-2304-163385904801912/AnsiballZ_stat.py'
Jan 20 18:57:38 compute-0 sudo[215143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:38 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:38 compute-0 ceph-mon[74381]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:57:38 compute-0 python3.9[215145]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:38 compute-0 sudo[215143]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:57:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:39.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:57:39 compute-0 sudo[215266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klctbhrcjymbmwnkvojvhpqgxxqyqydb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935458.460275-2304-163385904801912/AnsiballZ_copy.py'
Jan 20 18:57:39 compute-0 sudo[215266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:39 compute-0 python3.9[215268]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935458.460275-2304-163385904801912/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:39 compute-0 sudo[215266]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:39 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:57:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:39] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:57:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:39] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
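[annotation] The two lines above record one Prometheus scrape of the ceph-mgr prometheus module twice, once via the container's stdout and once via cherrypy's access log. The endpoint can be pulled by hand; 9283 is the module's default port and an assumption about this deployment:

    # Manual scrape of the mgr prometheus module (default port 9283).
    curl -s http://192.168.122.100:9283/metrics | head -n 5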
Jan 20 18:57:39 compute-0 sudo[215420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnsslwhdkfrudedmrlwlpzuuaidyqpyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935459.5955963-2304-274971024617817/AnsiballZ_stat.py'
Jan 20 18:57:39 compute-0 sudo[215420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:40 compute-0 python3.9[215422]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:40 compute-0 sudo[215420]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:57:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:40.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:57:40 compute-0 sudo[215543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifmkrrecuhfruihijuxromzuwphgbuil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935459.5955963-2304-274971024617817/AnsiballZ_copy.py'
Jan 20 18:57:40 compute-0 sudo[215543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:40 compute-0 python3.9[215545]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935459.5955963-2304-274971024617817/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:40 compute-0 sudo[215543]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:40 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:40 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0001930 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:40 compute-0 ceph-mon[74381]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:57:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
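[annotation] The mon audit line shows the mgr periodically dispatching "osd blocklist ls". The same query can be issued directly; a minimal sketch:

    # Same query the mgr dispatches in the audit line above.
    ceph osd blocklist ls --format json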
Jan 20 18:57:40 compute-0 sudo[215695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrxuvizqpuntaeeqrpoydwmdqkcmqhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935460.7141542-2304-123987023215537/AnsiballZ_stat.py'
Jan 20 18:57:40 compute-0 sudo[215695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:41.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:41 compute-0 python3.9[215697]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:41 compute-0 sudo[215695]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:41 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:41 compute-0 sudo[215835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xiwodfkcbxbnrniyeybxrbknynxdqnue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935460.7141542-2304-123987023215537/AnsiballZ_copy.py'
Jan 20 18:57:41 compute-0 sudo[215835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:41 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
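[annotation] The reaper messages above trace the NFS-Ganesha grace period: client info is reloaded from the backend, the reclaim count reaches zero, and grace is lifted. The managed NFS cluster behind this daemon can be inspected through the orchestrator; "cephfs" as the cluster id is inferred from the nfs-cephfs service name and is an assumption:

    # Inspect the managed NFS cluster behind the ganesha daemon above.
    ceph nfs cluster ls
    ceph nfs cluster info cephfs   # cluster id assumed from "nfs-cephfs"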
Jan 20 18:57:41 compute-0 podman[215792]: 2026-01-20 18:57:41.638169887 +0000 UTC m=+0.079476482 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
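[annotation] The podman event above is the periodic healthcheck for ovn_controller; per the embedded config, the test is /openstack/healthcheck run inside the container. It can be triggered once by hand; a minimal sketch:

    # Run the configured healthcheck once; exit status 0 means healthy.
    podman healthcheck run ovn_controller && echo healthy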
Jan 20 18:57:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:57:41 compute-0 python3.9[215841]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935460.7141542-2304-123987023215537/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:41 compute-0 sudo[215835]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:57:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:42.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:57:42 compute-0 sudo[215999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgyuzgqeywkjrfglgiybpqkthzfewaoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935461.9819107-2304-75023386497463/AnsiballZ_stat.py'
Jan 20 18:57:42 compute-0 sudo[215999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:42 compute-0 python3.9[216001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:42 compute-0 sudo[215999]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:42 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:42 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:42 compute-0 ceph-mon[74381]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:57:42 compute-0 sudo[216122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnfnkrrkdaeilipmibipjfirgztczfam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935461.9819107-2304-75023386497463/AnsiballZ_copy.py'
Jan 20 18:57:42 compute-0 sudo[216122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:43.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:43 compute-0 python3.9[216124]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935461.9819107-2304-75023386497463/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:43 compute-0 sudo[216122]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:43 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:43 compute-0 sudo[216274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdzzdsntopjxejscjqcqgmjscfldugma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935463.3404446-2304-124361889908259/AnsiballZ_stat.py'
Jan 20 18:57:43 compute-0 sudo[216274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 20 18:57:43 compute-0 python3.9[216277]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:43 compute-0 sudo[216274]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185743 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:57:44 compute-0 sudo[216399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqchiasbnzhgawgeoraxwuenuojotkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935463.3404446-2304-124361889908259/AnsiballZ_copy.py'
Jan 20 18:57:44 compute-0 sudo[216399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:44 compute-0 python3.9[216401]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935463.3404446-2304-124361889908259/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:44.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:44 compute-0 sudo[216399]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:44 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:44 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8002e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:44 compute-0 sudo[216551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfefczfyzjqtkbuozrqmksajxtnfullf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935464.4283712-2304-206568711647778/AnsiballZ_stat.py'
Jan 20 18:57:44 compute-0 sudo[216551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:44 compute-0 ceph-mon[74381]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 20 18:57:44 compute-0 python3.9[216553]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:44 compute-0 sudo[216551]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:45.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:45 compute-0 sudo[216674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgjvgkywpdyvwjbzqcngnawtzahcmmni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935464.4283712-2304-206568711647778/AnsiballZ_copy.py'
Jan 20 18:57:45 compute-0 sudo[216674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:45 compute-0 python3.9[216676]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935464.4283712-2304-206568711647778/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:45 compute-0 sudo[216674]: pam_unix(sudo:session): session closed for user root
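[annotation] The stat/copy pairs above walk through every libvirt socket unit (virtproxyd, virtqemud, virtsecretd and their -ro/-admin variants), dropping the same rendered libvirt-socket.unit.j2 (identical checksum each time) into each unit's drop-in directory. The template body is not logged (content=NOT_LOGGING_PARAMETER), so the override content below is a hypothetical placeholder; only the path, owner, group and mode come from the log:

    # Hand-rolled equivalent of one stat/copy pair above.
    d=/etc/systemd/system/virtsecretd-admin.socket.d
    mkdir -p "$d"
    # hypothetical content; the real body comes from libvirt-socket.unit.j2
    printf '[Socket]\nSocketMode=0666\n' > "$d/override.conf"
    chown root:root "$d/override.conf"; chmod 0644 "$d/override.conf"
    systemctl daemon-reload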
Jan 20 18:57:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:45 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:57:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:46.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:46 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4002290 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:46 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8001b40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:46 compute-0 ceph-mon[74381]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:57:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:47.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:57:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:47.047Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:57:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:47.047Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
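[annotation] Alertmanager keeps failing to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 (dial i/o timeout, then context deadline exceeded on retry). The same endpoint can be probed by hand to separate a dead host from a slow dashboard; the empty JSON body below is a stand-in, not the real alert payload:

    # Probe the failing webhook with a 5 s cap; a timeout here points at
    # the network or host, an HTTP status points at the dashboard itself.
    curl -sS -m 5 -o /dev/null -w '%{http_code}\n' \
      -X POST -H 'Content-Type: application/json' -d '{}' \
      http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver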
Jan 20 18:57:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:47.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:47 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8002e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:57:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:48.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:48 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8002e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:48 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4003390 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:48 compute-0 ceph-mon[74381]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:57:48 compute-0 python3.9[216830]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
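[annotation] The three-line entry above is a single shell task: with pipefail set, the grep makes the task fail unless something under /run/libvirt carries a container_*_t SELinux type. Standalone form, assuming a pipefail-capable /bin/sh (bash, as on this host):

    #!/bin/sh
    # Fail unless /run/libvirt contains files labeled container_*_t.
    set -o pipefail
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'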
Jan 20 18:57:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:49.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:49 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 20 18:57:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:49] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:57:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:49] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:57:49 compute-0 sudo[216985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axhlrshcneybbtpfdoguhrflpmnuknso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935469.3872585-2922-223930061538096/AnsiballZ_seboolean.py'
Jan 20 18:57:49 compute-0 sudo[216985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:50 compute-0 python3.9[216987]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
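[annotation] The seboolean task above persistently enables the os_enable_vtpm SELinux boolean. CLI equivalent:

    # -P writes the change to the policy store so it survives reboots.
    setsebool -P os_enable_vtpm on
    getsebool os_enable_vtpm    # expect: os_enable_vtpm --> on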
Jan 20 18:57:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:50.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:50 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:50 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8002e30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:51.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:51 compute-0 ceph-mon[74381]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 20 18:57:51 compute-0 sudo[216985]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:51 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4003390 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 20 18:57:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:52.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:52 compute-0 ceph-mon[74381]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 20 18:57:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:52 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:52 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:53 compute-0 sudo[217143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmpmxelfylnsxreboqvmujzjveolgmmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935472.7488408-2946-35877330885987/AnsiballZ_copy.py'
Jan 20 18:57:53 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 20 18:57:53 compute-0 sudo[217143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:57:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:53.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:57:53 compute-0 python3.9[217145]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:53 compute-0 sudo[217143]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:53 compute-0 sudo[217170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:57:53 compute-0 sudo[217170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:57:53 compute-0 sudo[217170]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:53 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:53 compute-0 sudo[217322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kizwfuptcecwdapexukiimxljxafoekp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935473.419331-2946-6763885876599/AnsiballZ_copy.py'
Jan 20 18:57:53 compute-0 sudo[217322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 20 18:57:53 compute-0 python3.9[217324]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:53 compute-0 sudo[217322]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:54 compute-0 sudo[217474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlallpgaemklfhejyifwebirqgfiqspe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935474.0210547-2946-114755948126273/AnsiballZ_copy.py'
Jan 20 18:57:54 compute-0 sudo[217474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:54.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:54 compute-0 python3.9[217476]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:54 compute-0 sudo[217474]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:54 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4003390 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:54 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:54 compute-0 ceph-mon[74381]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 255 B/s wr, 1 op/s
Jan 20 18:57:54 compute-0 sudo[217626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nngwopyodhhytxeljjvqihvbotdubhvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935474.6255534-2946-198736496983269/AnsiballZ_copy.py'
Jan 20 18:57:54 compute-0 sudo[217626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:57:54
Jan 20 18:57:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:57:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:57:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'vms', '.nfs', 'cephfs.cephfs.data', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'backups']
Jan 20 18:57:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
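[annotation] The balancer pass above evaluated all twelve pools in upmap mode and prepared 0/10 changes, i.e. the PG distribution is already within the 5% max-misplaced target. Its state can be read back directly; a minimal sketch:

    # Read back the balancer state driving the log lines above.
    ceph balancer status
    ceph balancer eval    # score of the current distribution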
Jan 20 18:57:55 compute-0 python3.9[217628]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:57:55 compute-0 sudo[217626]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:57:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:57:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:55.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
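[annotation] The pg_autoscaler walk above recomputes a PG target per pool from used space and bias, and every target quantizes back to the current pg_num, so nothing changes. The same per-pool inputs in tabular form:

    # Tabular view of the per-pool autoscaler inputs logged above.
    ceph osd pool autoscale-status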
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:57:55 compute-0 sudo[217778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chdfmjocaiorwawwtqwvnhasnmxfeqev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935475.200192-2946-141431521991189/AnsiballZ_copy.py'
Jan 20 18:57:55 compute-0 sudo[217778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:55 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:55 compute-0 python3.9[217780]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:55 compute-0 sudo[217778]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:57:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:57:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:57:56 compute-0 sudo[217807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:57:56 compute-0 sudo[217807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:57:56 compute-0 sudo[217807]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:56 compute-0 sudo[217832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 18:57:56 compute-0 sudo[217832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
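[annotation] The ceph-admin session above runs the cephadm binary copied under /var/lib/ceph/aecbbf3b-.../ with "ls" to inventory every ceph daemon deployed on this host. With the cephadm package installed, the direct form is:

    # Inventory the ceph daemons deployed on this host (run as root).
    cephadm ls | head -n 20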
Jan 20 18:57:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:56.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:56 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:56 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4003390 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:56 compute-0 podman[218013]: 2026-01-20 18:57:56.766072096 +0000 UTC m=+0.053345767 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:57:56 compute-0 sudo[218072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecyecjhiwjvharjaiclgijdaljcqniln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935476.5206003-3054-53552150275904/AnsiballZ_copy.py'
Jan 20 18:57:56 compute-0 sudo[218072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:56 compute-0 podman[218013]: 2026-01-20 18:57:56.865095698 +0000 UTC m=+0.152369359 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 18:57:56 compute-0 ceph-mon[74381]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:57:56 compute-0 python3.9[218074]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:56 compute-0 sudo[218072]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:57.048Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:57:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:57:57.049Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:57:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:57.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:57 compute-0 podman[218307]: 2026-01-20 18:57:57.375481158 +0000 UTC m=+0.057542227 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:57:57 compute-0 sudo[218351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myckflzvfwcpkkuohzrtczdcyhjtkmau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935477.1100178-3054-234464621270162/AnsiballZ_copy.py'
Jan 20 18:57:57 compute-0 podman[218307]: 2026-01-20 18:57:57.387295927 +0000 UTC m=+0.069356966 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:57:57 compute-0 sudo[218351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:57 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:57 compute-0 python3.9[218365]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:57 compute-0 sudo[218351]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:57 compute-0 podman[218408]: 2026-01-20 18:57:57.596333288 +0000 UTC m=+0.048752107 container exec abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 18:57:57 compute-0 podman[218408]: 2026-01-20 18:57:57.608097986 +0000 UTC m=+0.060516785 container exec_died abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 18:57:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:57:57 compute-0 podman[218522]: 2026-01-20 18:57:57.804881106 +0000 UTC m=+0.056801117 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:57:57 compute-0 podman[218522]: 2026-01-20 18:57:57.818070782 +0000 UTC m=+0.069990733 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 18:57:57 compute-0 sudo[218685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdoiiqszvkbbpwzberuhaosuucbstpzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935477.721514-3054-127385449699197/AnsiballZ_copy.py'
Jan 20 18:57:57 compute-0 sudo[218685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:58 compute-0 podman[218688]: 2026-01-20 18:57:58.008600639 +0000 UTC m=+0.051608432 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, architecture=x86_64, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., version=2.2.4, io.openshift.tags=Ceph keepalived, release=1793, vcs-type=git, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 20 18:57:58 compute-0 podman[218688]: 2026-01-20 18:57:58.033132391 +0000 UTC m=+0.076140174 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, architecture=x86_64, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Jan 20 18:57:58 compute-0 python3.9[218691]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:58 compute-0 sudo[218685]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:58 compute-0 podman[218756]: 2026-01-20 18:57:58.215039042 +0000 UTC m=+0.047565566 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:57:58 compute-0 podman[218756]: 2026-01-20 18:57:58.247159123 +0000 UTC m=+0.079685617 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:57:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:57:58.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:58 compute-0 podman[218908]: 2026-01-20 18:57:58.435246286 +0000 UTC m=+0.048620593 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:57:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185758 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:57:58 compute-0 podman[218908]: 2026-01-20 18:57:58.600338098 +0000 UTC m=+0.213712425 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 18:57:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:58 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:58 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:58 compute-0 sudo[219058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzawmpkitkkxhldcjurixqoxeciyafsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935478.2982645-3054-161248207602417/AnsiballZ_copy.py'
Jan 20 18:57:58 compute-0 sudo[219058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:58 compute-0 podman[219098]: 2026-01-20 18:57:58.949862476 +0000 UTC m=+0.042766170 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:57:58 compute-0 python3.9[219064]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:58 compute-0 ceph-mon[74381]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:57:58 compute-0 podman[219098]: 2026-01-20 18:57:58.989144194 +0000 UTC m=+0.082047858 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 18:57:59 compute-0 sudo[219058]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:59 compute-0 sudo[217832]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:57:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:57:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:57:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:57:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:57:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:57:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:57:59.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:57:59 compute-0 sudo[219164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:57:59 compute-0 sudo[219164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:57:59 compute-0 sudo[219164]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:59 compute-0 sudo[219189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:57:59 compute-0 sudo[219189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:57:59 compute-0 sudo[219352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohymtbktomkzyuoomisqxddsvlxrmkbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935479.1746984-3054-257184343592487/AnsiballZ_copy.py'
Jan 20 18:57:59 compute-0 sudo[219352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:57:59 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e4003390 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:57:59 compute-0 python3.9[219354]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:59 compute-0 sudo[219352]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:59 compute-0 sudo[219189]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:57:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:57:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:59] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:57:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:57:59] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:57:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:57:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:57:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:58:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:00.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.371162) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935480371201, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 494, "num_deletes": 251, "total_data_size": 573260, "memory_usage": 583336, "flush_reason": "Manual Compaction"}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935480382136, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 567791, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17573, "largest_seqno": 18065, "table_properties": {"data_size": 564998, "index_size": 829, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6604, "raw_average_key_size": 19, "raw_value_size": 559443, "raw_average_value_size": 1612, "num_data_blocks": 36, "num_entries": 347, "num_filter_entries": 347, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935450, "oldest_key_time": 1768935450, "file_creation_time": 1768935480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 11011 microseconds, and 2897 cpu microseconds.
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.382171) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 567791 bytes OK
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.382190) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.384234) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.384246) EVENT_LOG_v1 {"time_micros": 1768935480384242, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.384263) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 570439, prev total WAL file size 589754, number of live WAL files 2.
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.384657) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(554KB)], [35(13MB)]
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935480384734, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 14517999, "oldest_snapshot_seqno": -1}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 5063 keys, 12307729 bytes, temperature: kUnknown
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935480481977, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12307729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12273176, "index_size": 20811, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 128242, "raw_average_key_size": 25, "raw_value_size": 12180664, "raw_average_value_size": 2405, "num_data_blocks": 862, "num_entries": 5063, "num_filter_entries": 5063, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.482259) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12307729 bytes
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.483635) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 149.1 rd, 126.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.3 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(47.2) write-amplify(21.7) OK, records in: 5578, records dropped: 515 output_compression: NoCompression
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.483657) EVENT_LOG_v1 {"time_micros": 1768935480483647, "job": 16, "event": "compaction_finished", "compaction_time_micros": 97354, "compaction_time_cpu_micros": 24159, "output_level": 6, "num_output_files": 1, "total_output_size": 12307729, "num_input_records": 5578, "num_output_records": 5063, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935480483927, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935480486217, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.384569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.486246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.486250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.486252) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.486253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:58:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-18:58:00.486254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 18:58:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:00 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:00 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:00 compute-0 sudo[219523]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yicjnnetlsfnqtvtijkhvjzlurehhwbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935480.404942-3162-177096056493806/AnsiballZ_systemd.py'
Jan 20 18:58:00 compute-0 sudo[219523]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:00 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:00 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:00 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:58:00 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:58:00 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:00 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:00 compute-0 sudo[219526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:58:00 compute-0 sudo[219526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:00 compute-0 sudo[219526]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:00 compute-0 sudo[219551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:58:00 compute-0 sudo[219551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:01 compute-0 python3.9[219525]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:58:01 compute-0 systemd[1]: Reloading.
Jan 20 18:58:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:01.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:01 compute-0 systemd-sysv-generator[219621]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:01 compute-0 systemd-rc-local-generator[219618]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.314246712 +0000 UTC m=+0.036645040 container create 8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.298824159 +0000 UTC m=+0.021222507 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:01 compute-0 systemd[1]: Started libpod-conmon-8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d.scope.
Jan 20 18:58:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:58:01 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 20 18:58:01 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 20 18:58:01 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.448320662 +0000 UTC m=+0.170719010 container init 8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 18:58:01 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.455828598 +0000 UTC m=+0.178226926 container start 8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:58:01 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.459532705 +0000 UTC m=+0.181931043 container attach 8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:58:01 compute-0 practical_cray[219669]: 167 167
Jan 20 18:58:01 compute-0 systemd[1]: libpod-8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d.scope: Deactivated successfully.
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.462541894 +0000 UTC m=+0.184940212 container died 8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 18:58:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:01 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc1b0032883ae24ab295df9a22fcd13533daa6a0cbc0c7fd51c89e029f8f9b0e-merged.mount: Deactivated successfully.
Jan 20 18:58:01 compute-0 podman[219651]: 2026-01-20 18:58:01.503646309 +0000 UTC m=+0.226044637 container remove 8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_cray, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:58:01 compute-0 systemd[1]: libpod-conmon-8b5d560a27022a2d6adc38b80b4cd2b7969e6316d33a0c50712a5e861be2c60d.scope: Deactivated successfully.
Jan 20 18:58:01 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 20 18:58:01 compute-0 sudo[219523]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:01 compute-0 podman[219699]: 2026-01-20 18:58:01.652337751 +0000 UTC m=+0.040001927 container create 1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:58:01 compute-0 systemd[1]: Started libpod-conmon-1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e.scope.
Jan 20 18:58:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:58:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77698bf1e70ae29c09a67500a21ff813f0ca4284f8c81010957cf11e303c5be9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77698bf1e70ae29c09a67500a21ff813f0ca4284f8c81010957cf11e303c5be9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77698bf1e70ae29c09a67500a21ff813f0ca4284f8c81010957cf11e303c5be9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77698bf1e70ae29c09a67500a21ff813f0ca4284f8c81010957cf11e303c5be9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77698bf1e70ae29c09a67500a21ff813f0ca4284f8c81010957cf11e303c5be9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:01 compute-0 podman[219699]: 2026-01-20 18:58:01.726212045 +0000 UTC m=+0.113876221 container init 1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:58:01 compute-0 podman[219699]: 2026-01-20 18:58:01.636041155 +0000 UTC m=+0.023705351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:01 compute-0 podman[219699]: 2026-01-20 18:58:01.734956994 +0000 UTC m=+0.122621170 container start 1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:58:01 compute-0 podman[219699]: 2026-01-20 18:58:01.738444905 +0000 UTC m=+0.126109101 container attach 1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_engelbart, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:58:01 compute-0 ceph-mon[74381]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:58:01 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:58:01 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:58:01 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:58:02 compute-0 sudo[219879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlmwlpdadztcxwiggktdgfumtgjxivba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935481.7557895-3162-186770310480371/AnsiballZ_systemd.py'
Jan 20 18:58:02 compute-0 sudo[219879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:02 compute-0 elated_engelbart[219741]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:58:02 compute-0 elated_engelbart[219741]: --> All data devices are unavailable
Jan 20 18:58:02 compute-0 systemd[1]: libpod-1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e.scope: Deactivated successfully.
Jan 20 18:58:02 compute-0 podman[219699]: 2026-01-20 18:58:02.069363877 +0000 UTC m=+0.457028053 container died 1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-77698bf1e70ae29c09a67500a21ff813f0ca4284f8c81010957cf11e303c5be9-merged.mount: Deactivated successfully.
Jan 20 18:58:02 compute-0 podman[219699]: 2026-01-20 18:58:02.113960694 +0000 UTC m=+0.501624870 container remove 1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:58:02 compute-0 systemd[1]: libpod-conmon-1dabf90adf6163e5d1f55d15905d25c9ef81cdc3cdc871026164c0c337dce07e.scope: Deactivated successfully.
Jan 20 18:58:02 compute-0 sudo[219551]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:02 compute-0 sudo[219896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:58:02 compute-0 sudo[219896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:02 compute-0 sudo[219896]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:02 compute-0 sudo[219921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:58:02 compute-0 sudo[219921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:02.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:02 compute-0 python3.9[219883]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:58:02 compute-0 systemd[1]: Reloading.
Jan 20 18:58:02 compute-0 systemd-rc-local-generator[219972]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:02 compute-0 systemd-sysv-generator[219976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:02 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.653746333 +0000 UTC m=+0.041455006 container create e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 18:58:02 compute-0 systemd[1]: Started libpod-conmon-e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371.scope.
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.636653015 +0000 UTC m=+0.024361718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:58:02 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 20 18:58:02 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 20 18:58:02 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 20 18:58:02 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 20 18:58:02 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 20 18:58:02 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.765097577 +0000 UTC m=+0.152806280 container init e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 18:58:02 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 18:58:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:02 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.774901064 +0000 UTC m=+0.162609737 container start e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.778421036 +0000 UTC m=+0.166129709 container attach e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 18:58:02 compute-0 vigilant_goodall[220038]: 167 167
Jan 20 18:58:02 compute-0 systemd[1]: libpod-e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371.scope: Deactivated successfully.
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.781826265 +0000 UTC m=+0.169534948 container died e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 18:58:02 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 20 18:58:02 compute-0 ceph-mon[74381]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:58:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9b936263a90fce082fabafe5c5f77d7397cb3257749aa92c1457e06b2755059-merged.mount: Deactivated successfully.
Jan 20 18:58:02 compute-0 podman[220021]: 2026-01-20 18:58:02.827721107 +0000 UTC m=+0.215429780 container remove e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:58:02 compute-0 sudo[219879]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:02 compute-0 systemd[1]: libpod-conmon-e8d38d1cb1400d86b113ef05aa839184057d5688d4869c12a53e1d81a577b371.scope: Deactivated successfully.
Jan 20 18:58:02 compute-0 podman[220135]: 2026-01-20 18:58:02.985227479 +0000 UTC m=+0.039854354 container create 7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:58:03 compute-0 systemd[1]: Started libpod-conmon-7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e.scope.
Jan 20 18:58:03 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 20 18:58:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4626849dbafd4305c1e22e764089309f345db670681f9c0cc56e29289d0b2e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4626849dbafd4305c1e22e764089309f345db670681f9c0cc56e29289d0b2e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4626849dbafd4305c1e22e764089309f345db670681f9c0cc56e29289d0b2e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b4626849dbafd4305c1e22e764089309f345db670681f9c0cc56e29289d0b2e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:03 compute-0 podman[220135]: 2026-01-20 18:58:03.05059897 +0000 UTC m=+0.105225865 container init 7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:58:03 compute-0 podman[220135]: 2026-01-20 18:58:03.059049641 +0000 UTC m=+0.113676516 container start 7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:58:03 compute-0 podman[220135]: 2026-01-20 18:58:03.062976244 +0000 UTC m=+0.117603119 container attach 7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 18:58:03 compute-0 podman[220135]: 2026-01-20 18:58:02.969261661 +0000 UTC m=+0.023888556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:03 compute-0 podman[220175]: 2026-01-20 18:58:03.090942346 +0000 UTC m=+0.067907459 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 18:58:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:03.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:03 compute-0 sudo[220281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzdfpmykwdwztldybznrcymfiytuvmsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935482.9451072-3162-254321699975092/AnsiballZ_systemd.py'
Jan 20 18:58:03 compute-0 sudo[220281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:03 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 20 18:58:03 compute-0 silly_volhard[220184]: {
Jan 20 18:58:03 compute-0 silly_volhard[220184]:     "0": [
Jan 20 18:58:03 compute-0 silly_volhard[220184]:         {
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "devices": [
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "/dev/loop3"
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             ],
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "lv_name": "ceph_lv0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "lv_size": "21470642176",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "name": "ceph_lv0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "tags": {
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.cluster_name": "ceph",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.crush_device_class": "",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.encrypted": "0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.osd_id": "0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.type": "block",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.vdo": "0",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:                 "ceph.with_tpm": "0"
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             },
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "type": "block",
Jan 20 18:58:03 compute-0 silly_volhard[220184]:             "vg_name": "ceph_vg0"
Jan 20 18:58:03 compute-0 silly_volhard[220184]:         }
Jan 20 18:58:03 compute-0 silly_volhard[220184]:     ]
Jan 20 18:58:03 compute-0 silly_volhard[220184]: }
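The JSON block above is the inventory that ceph-volume (run via cephadm in the short-lived silly_volhard container) reports for OSD 0: one entry per OSD id, with the LVM tags on the backing logical volume carried both as the flat "lv_tags" string and as the parsed "tags" object. A minimal sketch of pulling the key fields back out of the same report, mirroring the raw-list invocation logged below at 18:58:04 and assuming jq is available (jq does not appear in this log):

    # Hedged sketch: print osd_id, osd_fsid and LV path for OSD "0".
    # The --fsid value is copied from the log; jq is an assumed extra tool.
    sudo cephadm ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json \
      | jq -r '.["0"][] | [.tags["ceph.osd_id"], .tags["ceph.osd_fsid"], .lv_path] | @tsv'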
Jan 20 18:58:03 compute-0 systemd[1]: libpod-7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e.scope: Deactivated successfully.
Jan 20 18:58:03 compute-0 podman[220135]: 2026-01-20 18:58:03.392733565 +0000 UTC m=+0.447360450 container died 7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:58:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b4626849dbafd4305c1e22e764089309f345db670681f9c0cc56e29289d0b2e-merged.mount: Deactivated successfully.
Jan 20 18:58:03 compute-0 podman[220135]: 2026-01-20 18:58:03.444147201 +0000 UTC m=+0.498774096 container remove 7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:58:03 compute-0 systemd[1]: libpod-conmon-7a67a84a5740cb15d5ab84f13156cc6f0b40b82c2541ce2b5b20e65587a8ef1e.scope: Deactivated successfully.
Jan 20 18:58:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:03 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:03 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 20 18:58:03 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 20 18:58:03 compute-0 sudo[219921]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:03 compute-0 python3.9[220284]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:58:03 compute-0 systemd[1]: Reloading.
Jan 20 18:58:03 compute-0 systemd-rc-local-generator[220359]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:58:03 compute-0 systemd-sysv-generator[220362]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:03 compute-0 sudo[220305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:58:03 compute-0 sudo[220305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:03 compute-0 sudo[220305]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:03 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 20 18:58:03 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 20 18:58:03 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 20 18:58:03 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 20 18:58:03 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 20 18:58:04 compute-0 sudo[220371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:58:04 compute-0 sudo[220371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:04 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 20 18:58:04 compute-0 sudo[220281]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:58:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:04.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:58:04 compute-0 setroubleshoot[220185]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l dcb2f659-0233-45e9-864c-147ffe1f89c4
Jan 20 18:58:04 compute-0 setroubleshoot[220185]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
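
A consolidated, runnable version of the two suggestions above (commands verbatim from the setroubleshoot output; run as root, and prefer fixing file ownership over installing a policy module if a PATH record points at a mislabeled file):

    # 1. Turn on full auditing so subsequent AVC records carry PATH information.
    auditctl -w /etc/shadow -p w
    # 2. Reproduce the denial (e.g. restart virtlogd), then inspect recent AVCs.
    ausearch -m avc -ts recent
    # 3. Only if the access is legitimate: build and install a local policy module.
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp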
                                                  
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.525896545 +0000 UTC m=+0.056378517 container create a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_galois, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:58:04 compute-0 systemd[1]: Started libpod-conmon-a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289.scope.
Jan 20 18:58:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.504307299 +0000 UTC m=+0.034789261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:04 compute-0 sudo[220627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyefhxdiygevehennnhorswvselujzxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935484.229023-3162-174227622043749/AnsiballZ_systemd.py'
Jan 20 18:58:04 compute-0 sudo[220627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:04 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.618597601 +0000 UTC m=+0.149079643 container init a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.628414088 +0000 UTC m=+0.158896080 container start a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_galois, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.633331116 +0000 UTC m=+0.163813158 container attach a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:58:04 compute-0 systemd[1]: libpod-a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289.scope: Deactivated successfully.
Jan 20 18:58:04 compute-0 conmon[220615]: conmon a7aca786600846ee4a10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289.scope/container/memory.events
Jan 20 18:58:04 compute-0 recursing_galois[220615]: 167 167
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.641210773 +0000 UTC m=+0.171692775 container died a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:58:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-80f651366800f7d2fab2e0c68e2aea8848c210528d2596b17d6d027dbc04da62-merged.mount: Deactivated successfully.
Jan 20 18:58:04 compute-0 podman[220558]: 2026-01-20 18:58:04.688907281 +0000 UTC m=+0.219389243 container remove a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_galois, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 18:58:04 compute-0 systemd[1]: libpod-conmon-a7aca786600846ee4a108818d55b41447010d18fa165b4a7f5ac9309e9226289.scope: Deactivated successfully.
Jan 20 18:58:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:04 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:04 compute-0 ceph-mon[74381]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:58:04 compute-0 podman[220653]: 2026-01-20 18:58:04.867338702 +0000 UTC m=+0.062969039 container create fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 18:58:04 compute-0 systemd[1]: Started libpod-conmon-fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760.scope.
Jan 20 18:58:04 compute-0 podman[220653]: 2026-01-20 18:58:04.841216158 +0000 UTC m=+0.036846585 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:58:04 compute-0 python3.9[220629]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfaff756fe5846548224b7ac9e9f55455ef93a3d2bc89369a3a107131276fff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfaff756fe5846548224b7ac9e9f55455ef93a3d2bc89369a3a107131276fff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfaff756fe5846548224b7ac9e9f55455ef93a3d2bc89369a3a107131276fff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfaff756fe5846548224b7ac9e9f55455ef93a3d2bc89369a3a107131276fff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:04 compute-0 systemd[1]: Reloading.
Jan 20 18:58:04 compute-0 podman[220653]: 2026-01-20 18:58:04.974348603 +0000 UTC m=+0.169978960 container init fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid)
Jan 20 18:58:04 compute-0 podman[220653]: 2026-01-20 18:58:04.982961268 +0000 UTC m=+0.178591605 container start fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:58:04 compute-0 podman[220653]: 2026-01-20 18:58:04.986655465 +0000 UTC m=+0.182285822 container attach fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:58:05 compute-0 systemd-sysv-generator[220696]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:05 compute-0 systemd-rc-local-generator[220693]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:05.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:05 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 20 18:58:05 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 20 18:58:05 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 18:58:05 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 20 18:58:05 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 20 18:58:05 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 20 18:58:05 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 20 18:58:05 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 20 18:58:05 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 20 18:58:05 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 20 18:58:05 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 18:58:05 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 20 18:58:05 compute-0 sudo[220627]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:05 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e0002da0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:58:05 compute-0 lvm[220857]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:58:05 compute-0 lvm[220857]: VG ceph_vg0 finished
Jan 20 18:58:05 compute-0 musing_galois[220670]: {}
Jan 20 18:58:05 compute-0 systemd[1]: libpod-fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760.scope: Deactivated successfully.
Jan 20 18:58:05 compute-0 systemd[1]: libpod-fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760.scope: Consumed 1.310s CPU time.
Jan 20 18:58:05 compute-0 podman[220653]: 2026-01-20 18:58:05.798053872 +0000 UTC m=+0.993684229 container died fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 18:58:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcfaff756fe5846548224b7ac9e9f55455ef93a3d2bc89369a3a107131276fff-merged.mount: Deactivated successfully.
Jan 20 18:58:05 compute-0 podman[220653]: 2026-01-20 18:58:05.850465395 +0000 UTC m=+1.046095732 container remove fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_galois, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 18:58:05 compute-0 systemd[1]: libpod-conmon-fd35851a864eaf8d05cc903d0f19f39f712c587b7a8180f25e74b502b4958760.scope: Deactivated successfully.
Jan 20 18:58:05 compute-0 sudo[220371]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:58:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:58:05 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:05 compute-0 sudo[220976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdgxldgmceezbfughzpalaeipvxigexb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935485.6683474-3162-246734375997130/AnsiballZ_systemd.py'
Jan 20 18:58:05 compute-0 sudo[220976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:06 compute-0 sudo[220975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:58:06 compute-0 sudo[220975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:06 compute-0 sudo[220975]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:06 compute-0 python3.9[220995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
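Each of the three ansible-ansible.builtin.systemd invocations in this span (virtproxyd at 18:58:03, virtqemud at 18:58:04, virtsecretd here) asks systemd for a daemon-reload followed by a unit restart, which is why each is immediately followed by a systemd "Reloading." line. A rough shell equivalent of one such task, unit name taken from the log:

    # Approximate equivalent of the Ansible task logged above.
    systemctl daemon-reload
    systemctl restart virtsecretd.service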
Jan 20 18:58:06 compute-0 systemd[1]: Reloading.
Jan 20 18:58:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:58:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:06.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:58:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:06 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:58:06 compute-0 systemd-rc-local-generator[221028]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:06 compute-0 systemd-sysv-generator[221032]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:06 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:06 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 20 18:58:06 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 20 18:58:06 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 20 18:58:06 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 20 18:58:06 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 20 18:58:06 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 20 18:58:06 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 18:58:06 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 18:58:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:06 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e8004620 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:06 compute-0 sudo[220976]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:06 compute-0 ceph-mon[74381]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 18:58:06 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:06 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:58:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:07.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:58:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:07.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:07 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:07 compute-0 sudo[221213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwepqmvrqlttcclbrfxvnrfwtsfsnlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935487.3629136-3273-45708760140485/AnsiballZ_file.py'
Jan 20 18:58:07 compute-0 sudo[221213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:58:07 compute-0 python3.9[221215]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:07 compute-0 sudo[221213]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:08.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:08 compute-0 sudo[221366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttcvmtiksfsjgrfbshchedboogldapak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935488.17445-3297-81661923409944/AnsiballZ_find.py'
Jan 20 18:58:08 compute-0 sudo[221366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:08 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:08 compute-0 python3.9[221368]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 18:58:08 compute-0 sudo[221366]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:08 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:08 compute-0 ceph-mon[74381]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:58:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:09.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:09 compute-0 sudo[221520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uchfuxekkllitsjasugnlipapuzzvrub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935489.0310621-3321-254322796451897/AnsiballZ_command.py'
Jan 20 18:58:09 compute-0 sudo[221520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:09 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:58:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:09 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:58:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:09 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40cc000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:09 compute-0 python3.9[221522]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
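The shell payload of the command task above, reproduced standalone: the first line echoes the literal cluster name, the second parses the fsid out of ceph.conf, with xargs serving only to trim surrounding whitespace (path copied from the log):

    set -o pipefail
    echo ceph
    awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs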
Jan 20 18:58:09 compute-0 sudo[221520]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:58:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:09] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:58:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:09] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Jan 20 18:58:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:10.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:10 compute-0 python3.9[221678]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 18:58:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:10 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:10 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:10 compute-0 ceph-mon[74381]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:58:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:58:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:11.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:11 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:11 compute-0 python3.9[221828]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:58:11 compute-0 podman[221925]: 2026-01-20 18:58:11.9833978 +0000 UTC m=+0.097490563 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 20 18:58:12 compute-0 python3.9[221966]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935491.0838852-3378-194291513344414/.source.xml follow=False _original_basename=secret.xml.j2 checksum=3cdb940d28f218c644bbb310b25eee63bb3b21cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:12.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:12 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:58:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:12 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40cc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:12 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:12 compute-0 sudo[222128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnqgwyroxnfnvdkgxbohrtplncicdbsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935492.5373902-3423-217248256304407/AnsiballZ_command.py'
Jan 20 18:58:12 compute-0 sudo[222128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:13 compute-0 ceph-mon[74381]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:58:13 compute-0 python3.9[222130]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine aecbbf3b-b405-507b-97d7-637a83f5b4b1
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:13 compute-0 polkitd[43401]: Registered Authentication Agent for unix-process:222132:368939 (system bus name :1.2878 [pkttyagent --process 222132 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 18:58:13 compute-0 polkitd[43401]: Unregistered Authentication Agent for unix-process:222132:368939 (system bus name :1.2878, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 18:58:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 18:58:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:13.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 18:58:13 compute-0 polkitd[43401]: Registered Authentication Agent for unix-process:222131:368938 (system bus name :1.2879 [pkttyagent --process 222131 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 18:58:13 compute-0 polkitd[43401]: Unregistered Authentication Agent for unix-process:222131:368938 (system bus name :1.2879, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 18:58:13 compute-0 sudo[222128]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:13 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:13 compute-0 sudo[222167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:58:13 compute-0 sudo[222167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:13 compute-0 sudo[222167]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:58:14 compute-0 python3.9[222319]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:14.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:14 compute-0 sudo[222469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lueermzswvhllloyovbqqpfmadjwjtnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935494.2357738-3471-72101866298399/AnsiballZ_command.py'
Jan 20 18:58:14 compute-0 sudo[222469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:14 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 20 18:58:14 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 20 18:58:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:14 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40cc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:14 compute-0 sudo[222469]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:15 compute-0 ceph-mon[74381]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Jan 20 18:58:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:15.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:15 compute-0 sudo[222622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmuqxoyfjlgnlfnqnoqlloancfrqyygo ; FSID=aecbbf3b-b405-507b-97d7-637a83f5b4b1 KEY=AQCMy29pAAAAABAAS5mI8AokUU3QFTWUgUlXCA== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935495.093101-3495-201592670898218/AnsiballZ_command.py'
Jan 20 18:58:15 compute-0 sudo[222622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:15 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:15 compute-0 polkitd[43401]: Registered Authentication Agent for unix-process:222627:369198 (system bus name :1.2883 [pkttyagent --process 222627 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 18:58:15 compute-0 polkitd[43401]: Unregistered Authentication Agent for unix-process:222627:369198 (system bus name :1.2883, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 18:58:15 compute-0 sudo[222622]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:16.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:16 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:16 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:16 compute-0 sudo[222782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dniwdkvgcyjvlemqwxlijbvzrsmcqjbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935496.5754595-3519-187686414600484/AnsiballZ_copy.py'
Jan 20 18:58:16 compute-0 sudo[222782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:17 compute-0 python3.9[222784]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:17.051Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:58:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:17.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:58:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:17.053Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:58:17 compute-0 sudo[222782]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:17.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:17 compute-0 ceph-mon[74381]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:17 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40cc0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:58:17 compute-0 sudo[222936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwkhrhpaijtfgxamrmmoyyyotsoehahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935497.4027073-3543-25461444722674/AnsiballZ_stat.py'
Jan 20 18:58:17 compute-0 sudo[222936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:18 compute-0 python3.9[222938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:18 compute-0 sudo[222936]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:18.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:18 compute-0 sudo[223059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpipbdrxsylekjwcgntbzrkoeowqvalm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935497.4027073-3543-25461444722674/AnsiballZ_copy.py'
Jan 20 18:58:18 compute-0 sudo[223059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185818 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:58:18 compute-0 ceph-mon[74381]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:58:18 compute-0 python3.9[223061]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935497.4027073-3543-25461444722674/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:18 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:18 compute-0 sudo[223059]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:18 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:19.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:19 compute-0 sudo[223211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udrcqqnstmxherqvzlmzfqgswdpuqjps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935499.138888-3591-213265903468024/AnsiballZ_file.py'
Jan 20 18:58:19 compute-0 sudo[223211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:19 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:19 compute-0 python3.9[223213]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:19 compute-0 sudo[223211]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:58:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:58:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:19] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:58:20 compute-0 sudo[223365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfrwqdarraytzpscpklmfmcxxvgdqexe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935499.9810417-3615-49600637674673/AnsiballZ_stat.py'
Jan 20 18:58:20 compute-0 sudo[223365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:20.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:20 compute-0 python3.9[223367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:20 compute-0 sudo[223365]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:20 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40cc002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:20 compute-0 ceph-mon[74381]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:58:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:20 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40e40040a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:20 compute-0 sudo[223443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djmtibhpnghfbgcaczldmebqqiiahycu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935499.9810417-3615-49600637674673/AnsiballZ_file.py'
Jan 20 18:58:20 compute-0 sudo[223443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:21 compute-0 python3.9[223445]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:21 compute-0 sudo[223443]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:58:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:21.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:58:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:21 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:58:21 compute-0 sudo[223597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppkoxizrdbopamrbeskswkakkhjtcvot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935501.4532304-3651-3706119583807/AnsiballZ_stat.py'
Jan 20 18:58:21 compute-0 sudo[223597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:22 compute-0 python3.9[223599]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:22 compute-0 sudo[223597]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:22 compute-0 sudo[223675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irxapgdwkpjouxngxhzcgtjiqwiukzkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935501.4532304-3651-3706119583807/AnsiballZ_file.py'
Jan 20 18:58:22 compute-0 sudo[223675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:22.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:22 compute-0 python3.9[223677]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gocu4msg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:22 compute-0 sudo[223675]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:22 compute-0 kernel: ganesha.nfsd[221370]: segfault at 50 ip 00007f417515632e sp 00007f40d5ffa210 error 4 in libntirpc.so.5.8[7f417513b000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 20 18:58:22 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 18:58:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[210411]: 20/01/2026 18:58:22 : epoch 696fd00a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f40c8003c10 fd 39 proxy ignored for local
Jan 20 18:58:22 compute-0 systemd[1]: Started Process Core Dump (PID 223702/UID 0).
Jan 20 18:58:22 compute-0 ceph-mon[74381]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 18:58:23 compute-0 sudo[223829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyfzrdohctawfacofatqalryaafnrggy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935502.7843437-3687-35268352148273/AnsiballZ_stat.py'
Jan 20 18:58:23 compute-0 sudo[223829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:23.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:23 compute-0 python3.9[223831]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:23 compute-0 sudo[223829]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:23 compute-0 sudo[223907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhzuiicqxhnxneyqzspllqilcrtsjhsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935502.7843437-3687-35268352148273/AnsiballZ_file.py'
Jan 20 18:58:23 compute-0 sudo[223907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:58:23 compute-0 python3.9[223909]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:23 compute-0 systemd-coredump[223703]: Process 210415 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f417515632e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    #1  0x0000000000000000 n/a (n/a + 0x0)
                                                    #2  0x00007f4175160900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:58:23 compute-0 sudo[223907]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:23 compute-0 systemd[1]: systemd-coredump@9-223702-0.service: Deactivated successfully.
Jan 20 18:58:23 compute-0 systemd[1]: systemd-coredump@9-223702-0.service: Consumed 1.143s CPU time.
Jan 20 18:58:23 compute-0 podman[223940]: 2026-01-20 18:58:23.912499452 +0000 UTC m=+0.025969126 container died abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:58:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8b6daf5b8a9306411b3ba3d6b50b6160b2d8cccb7ee3d0f7b149895b1266122-merged.mount: Deactivated successfully.
Jan 20 18:58:23 compute-0 podman[223940]: 2026-01-20 18:58:23.955241209 +0000 UTC m=+0.068710863 container remove abfcaa9520820940e6fa70b64c4df644f07298ed0b90326d610d9a5408659009 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:58:23 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:58:24 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:58:24 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.443s CPU time.
Jan 20 18:58:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:24.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:24 compute-0 sudo[224107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyowrfugcmjndorvenfvjfxmsbvnokyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935504.2242084-3726-83853946596604/AnsiballZ_command.py'
Jan 20 18:58:24 compute-0 sudo[224107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:24 compute-0 python3.9[224109]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:24 compute-0 sudo[224107]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:24 compute-0 ceph-mon[74381]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:58:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:25.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:25 compute-0 sudo[224260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtrhuwexnorcnvrglhkrbcowbxtjtrly ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935505.0454676-3750-251325234642647/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 18:58:25 compute-0 sudo[224260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:25 compute-0 python3[224262]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 18:58:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:25 compute-0 sudo[224260]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:58:26 compute-0 sudo[224414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naingcvzmkumewipvbsnqpbmvnnvpyve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935505.995608-3774-44946397357574/AnsiballZ_stat.py'
Jan 20 18:58:26 compute-0 sudo[224414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:26.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:26 compute-0 python3.9[224416]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:26 compute-0 sudo[224414]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:26 compute-0 sudo[224492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agsntzxzyugekeerxyagbzumlogfqznz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935505.995608-3774-44946397357574/AnsiballZ_file.py'
Jan 20 18:58:26 compute-0 sudo[224492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:26 compute-0 python3.9[224494]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:26 compute-0 sudo[224492]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:27 compute-0 ceph-mon[74381]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:27.053Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:58:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:27.055Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:58:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:27.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:27 compute-0 sudo[224646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdxinllzcknerhgvlftsmhqgoiywgok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935507.4294062-3810-76631456451110/AnsiballZ_stat.py'
Jan 20 18:58:27 compute-0 sudo[224646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:27 compute-0 python3.9[224648]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:27 compute-0 sudo[224646]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:28 compute-0 sudo[224771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovhsfdraajgcxfjyzgxcjktmwxvnuwxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935507.4294062-3810-76631456451110/AnsiballZ_copy.py'
Jan 20 18:58:28 compute-0 sudo[224771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:28.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:28 compute-0 python3.9[224773]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935507.4294062-3810-76631456451110/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:28 compute-0 sudo[224771]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185828 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:58:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:29 compute-0 ceph-mon[74381]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:29 compute-0 sudo[224923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tiwxbmytyrnywipmastimcwoohxlivyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935509.2472112-3855-80014230782920/AnsiballZ_stat.py'
Jan 20 18:58:29 compute-0 sudo[224923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:29 compute-0 python3.9[224925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:29 compute-0 sudo[224923]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:29] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:58:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:29] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:58:29 compute-0 sudo[225003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roaeuwtcwdxlhwthwyylilgmwftspnrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935509.2472112-3855-80014230782920/AnsiballZ_file.py'
Jan 20 18:58:29 compute-0 sudo[225003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:30 compute-0 python3.9[225005]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:30 compute-0 sudo[225003]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:58:30.192 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:58:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:58:30.192 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:58:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:58:30.192 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:58:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:58:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:30.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:58:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:30 compute-0 sudo[225155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdqxgmebzqnsntgnqdlxmancrkgiklqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935510.5784233-3891-170210093028400/AnsiballZ_stat.py'
Jan 20 18:58:30 compute-0 sudo[225155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:31 compute-0 python3.9[225157]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:31 compute-0 sudo[225155]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:31.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:31 compute-0 ceph-mon[74381]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:31 compute-0 sudo[225233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftcordytrnxbcfsuwsevgskdayqpryuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935510.5784233-3891-170210093028400/AnsiballZ_file.py'
Jan 20 18:58:31 compute-0 sudo[225233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:31 compute-0 python3.9[225235]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:31 compute-0 sudo[225233]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:32.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:32 compute-0 sudo[225387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlfxqggjordojybmkjzmijkmnflvoqmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935512.0136418-3927-6224225121648/AnsiballZ_stat.py'
Jan 20 18:58:32 compute-0 sudo[225387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:32 compute-0 python3.9[225389]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:32 compute-0 sudo[225387]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:32 compute-0 sudo[225512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgkpvishqvelkvedxbhhoixwzzyetrxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935512.0136418-3927-6224225121648/AnsiballZ_copy.py'
Jan 20 18:58:32 compute-0 sudo[225512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:33 compute-0 python3.9[225514]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935512.0136418-3927-6224225121648/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:33 compute-0 sudo[225512]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:33 compute-0 ceph-mon[74381]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:33 compute-0 sudo[225540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:58:33 compute-0 sudo[225540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:33 compute-0 sudo[225540]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:33 compute-0 podman[225571]: 2026-01-20 18:58:33.725048962 +0000 UTC m=+0.053638222 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 18:58:33 compute-0 sudo[225709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuocjvghyrpztckrhmhglwkhxcufewgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935513.6853797-3972-180039930668471/AnsiballZ_file.py'
Jan 20 18:58:33 compute-0 sudo[225709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:34 compute-0 python3.9[225711]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:34 compute-0 sudo[225709]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:34 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 10.
Jan 20 18:58:34 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:58:34 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.443s CPU time.
Jan 20 18:58:34 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
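The "restart counter is at 10" line above is systemd's Restart= bookkeeping for the Ceph NFS unit being cycled here. A sketch of querying that counter directly, with the unit name taken from the log:

    # NRestarts is the property systemd increments on each scheduled restart.
    systemctl show -p NRestarts ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service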
Jan 20 18:58:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:34.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:34 compute-0 podman[225786]: 2026-01-20 18:58:34.479511194 +0000 UTC m=+0.044612075 container create d071ca8dfc06ee12e00a2e29069311aae236f780132e498d62303b0eb9bbd23c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007148ab82a89dc9cb08a74cc1ac49450b286dcc834e9e6c2e3dcd1e7350cd36/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007148ab82a89dc9cb08a74cc1ac49450b286dcc834e9e6c2e3dcd1e7350cd36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007148ab82a89dc9cb08a74cc1ac49450b286dcc834e9e6c2e3dcd1e7350cd36/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/007148ab82a89dc9cb08a74cc1ac49450b286dcc834e9e6c2e3dcd1e7350cd36/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:58:34 compute-0 podman[225786]: 2026-01-20 18:58:34.459074279 +0000 UTC m=+0.024175200 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:58:34 compute-0 podman[225786]: 2026-01-20 18:58:34.553396656 +0000 UTC m=+0.118497547 container init d071ca8dfc06ee12e00a2e29069311aae236f780132e498d62303b0eb9bbd23c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:58:34 compute-0 podman[225786]: 2026-01-20 18:58:34.557931151 +0000 UTC m=+0.123032032 container start d071ca8dfc06ee12e00a2e29069311aae236f780132e498d62303b0eb9bbd23c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:58:34 compute-0 bash[225786]: d071ca8dfc06ee12e00a2e29069311aae236f780132e498d62303b0eb9bbd23c
Jan 20 18:58:34 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:58:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:34 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:58:34 compute-0 sudo[225968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juocfhffcpoetylrgcktsyfzoscibfnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935514.5432255-3996-69394504304896/AnsiballZ_command.py'
Jan 20 18:58:34 compute-0 sudo[225968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:35 compute-0 python3.9[225970]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:35 compute-0 sudo[225968]: pam_unix(sudo:session): session closed for user root
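The command task above is a dry-run syntax check: the edpm nftables fragments are concatenated in include order and fed to nft in check mode, so a bad rule fails the play before anything touches the live ruleset. The equivalent shell, reconstructed from the logged _raw_params:

    set -o pipefail
    cat /etc/nftables/edpm-chains.nft \
        /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -   # -c: parse and check only, commit nothing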
Jan 20 18:58:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:58:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:35.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:58:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:35 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:58:35 compute-0 ceph-mon[74381]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:35 compute-0 sudo[226125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyrcgesgbnkaneyjnzypealpmowjuozs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935515.485762-4020-8138477315453/AnsiballZ_blockinfile.py'
Jan 20 18:58:35 compute-0 sudo[226125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:36 compute-0 python3.9[226127]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:36 compute-0 sudo[226125]: pam_unix(sudo:session): session closed for user root
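The blockinfile task above maintains a marker-delimited block in /etc/sysconfig/nftables.conf, validating the result with nft -c -f %s before the write lands. Given the logged block= and marker= parameters, the managed section of the file should read:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK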
Jan 20 18:58:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:36.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:36 compute-0 ceph-mon[74381]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:58:36 compute-0 sudo[226277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgvbrhmxjuwvlewpjghjifmagppulqtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935516.6081495-4047-159371310594900/AnsiballZ_command.py'
Jan 20 18:58:36 compute-0 sudo[226277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:37.056Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:58:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:37.056Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:58:37 compute-0 python3.9[226279]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:37 compute-0 sudo[226277]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:37 compute-0 sudo[226432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cugrtdkrgjoxnzxaxayujhmylezprjyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935517.4782815-4071-124963778045692/AnsiballZ_stat.py'
Jan 20 18:58:37 compute-0 sudo[226432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:37 compute-0 python3.9[226434]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:58:37 compute-0 sudo[226432]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:38.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:38 compute-0 sudo[226586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwlqlycxlpeqfbprexgconaguwzxxaky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935518.3245165-4095-229504450242985/AnsiballZ_command.py'
Jan 20 18:58:38 compute-0 sudo[226586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:38 compute-0 ceph-mon[74381]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:38 compute-0 python3.9[226588]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:38 compute-0 sudo[226586]: pam_unix(sudo:session): session closed for user root
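The two command tasks above (18:58:37 and 18:58:38) perform the live reload in two stages: the chains file is loaded first so every chain exists, then the flush/rules/update-jumps fragments are streamed to nft as a single transaction, so existing chains are flushed and repopulated atomically. As shell:

    nft -f /etc/nftables/edpm-chains.nft                  # stage 1: ensure chains exist
    cat /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -    # stage 2: flush + reload as one ruleset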
Jan 20 18:58:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:39.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:39 compute-0 sudo[226741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozeqgljdczfbqqgkchqbtxkxkuapyfcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935519.149783-4119-81093681601193/AnsiballZ_file.py'
Jan 20 18:58:39 compute-0 sudo[226741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:39 compute-0 python3.9[226743]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:39 compute-0 sudo[226741]: pam_unix(sudo:session): session closed for user root
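Taken together, edpm-rules.nft.changed acts as a change sentinel: it is touched right after the rules file is written (18:58:34), stat-ed to decide whether a reload is needed (18:58:37), and deleted once the reload succeeds (18:58:39). A condensed sketch of the pattern, assuming only the paths shown in the log:

    if [ -e /etc/nftables/edpm-rules.nft.changed ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f - \
        && rm -f /etc/nftables/edpm-rules.nft.changed   # clear the sentinel only on success
    fi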
Jan 20 18:58:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:39] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:58:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:39] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:58:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:40.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:40 compute-0 sudo[226895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxeququaxetmgiuxrppuwkhfxkbkdzmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935520.1818576-4143-64981759888048/AnsiballZ_stat.py'
Jan 20 18:58:40 compute-0 sudo[226895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:40 compute-0 python3.9[226897]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:40 compute-0 sudo[226895]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:40 compute-0 ceph-mon[74381]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:58:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:58:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:58:41 compute-0 sudo[227018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjkyifzendkgxwqewxfkccmmqnwtkohe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935520.1818576-4143-64981759888048/AnsiballZ_copy.py'
Jan 20 18:58:41 compute-0 sudo[227018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:41 compute-0 python3.9[227020]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935520.1818576-4143-64981759888048/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:41 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:58:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:41 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:58:41 compute-0 sudo[227018]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:58:42 compute-0 sudo[227172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hydbdqtlyfbnagyhpbkwzortyahtgsis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935521.6438434-4188-4171441448522/AnsiballZ_stat.py'
Jan 20 18:58:42 compute-0 sudo[227172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:42 compute-0 podman[227174]: 2026-01-20 18:58:42.135413659 +0000 UTC m=+0.098724500 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 20 18:58:42 compute-0 python3.9[227175]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:42 compute-0 sudo[227172]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:42.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:42 compute-0 sudo[227322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ettwjcikaiuobsokjnfoivzhsdojkuio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935521.6438434-4188-4171441448522/AnsiballZ_copy.py'
Jan 20 18:58:42 compute-0 sudo[227322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:42 compute-0 python3.9[227324]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935521.6438434-4188-4171441448522/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:42 compute-0 sudo[227322]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:42 compute-0 ceph-mon[74381]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Jan 20 18:58:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:58:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:43.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:58:43 compute-0 sudo[227474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhuydurrlptndgidxyvizmtqvwujalxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935523.2002337-4233-122117159594610/AnsiballZ_stat.py'
Jan 20 18:58:43 compute-0 sudo[227474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:43 compute-0 python3.9[227476]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:43 compute-0 sudo[227474]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:44 compute-0 sudo[227599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjfsqutveqqvghlhjytcdakdlkbhcmir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935523.2002337-4233-122117159594610/AnsiballZ_copy.py'
Jan 20 18:58:44 compute-0 sudo[227599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:44 compute-0 python3.9[227601]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935523.2002337-4233-122117159594610/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:44 compute-0 sudo[227599]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:44.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:44 compute-0 ceph-mon[74381]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:45.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:45 compute-0 sudo[227751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crczbluclqtvyymspdzivebosiqwvrqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935524.7674556-4278-253195593746202/AnsiballZ_systemd.py'
Jan 20 18:58:45 compute-0 sudo[227751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:45 compute-0 python3.9[227753]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:58:45 compute-0 systemd[1]: Reloading.
Jan 20 18:58:45 compute-0 systemd-rc-local-generator[227784]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:45 compute-0 systemd-sysv-generator[227787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:45 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 20 18:58:45 compute-0 sudo[227751]: pam_unix(sudo:session): session closed for user root
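The systemd task above (daemon_reload=True, enabled=True, state=restarted) activates the unit files copied in the preceding minutes; "Reached target edpm_libvirt.target" confirms the target came up. The equivalent systemctl sequence:

    systemctl daemon-reload                   # pick up the newly installed unit files
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target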
Jan 20 18:58:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:46.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:46 compute-0 sudo[227945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnfomxesbfbondeivfjnmmmyqndeuvjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935526.6470528-4302-39588347620922/AnsiballZ_systemd.py'
Jan 20 18:58:46 compute-0 sudo[227945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:47 compute-0 ceph-mon[74381]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:47.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:58:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:47.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:47 compute-0 python3.9[227947]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 18:58:47 compute-0 systemd[1]: Reloading.
Jan 20 18:58:47 compute-0 systemd-rc-local-generator[227976]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:47 compute-0 systemd-sysv-generator[227979]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:58:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:47 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
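The DBUS :CRIT lines above stem from the error ganesha itself reports: /run/dbus/system_bus_socket does not exist inside the container, so the DBus admin interface thread exits while NFS service startup otherwise completes. A hypothetical check from the host, using the container name from the log:

    # Verify whether the system bus socket is visible in the container namespace.
    podman exec ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx \
        test -S /run/dbus/system_bus_socket || echo "no system bus socket in container"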
Jan 20 18:58:47 compute-0 systemd[1]: Reloading.
Jan 20 18:58:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:58:47 compute-0 systemd-rc-local-generator[228028]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:58:47 compute-0 systemd-sysv-generator[228031]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:58:47 compute-0 sudo[227945]: pam_unix(sudo:session): session closed for user root
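This second systemd call enables edpm_libvirt_guests without starting it (the logged parameters carry no state=), so guest shutdown handling is armed for future boots only. Equivalent:

    systemctl daemon-reload
    systemctl enable edpm_libvirt_guests.service   # enable only; not started now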
Jan 20 18:58:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 18:58:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3985 writes, 18K keys, 3983 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.03 MB/s
                                           Cumulative WAL: 3985 writes, 3983 syncs, 1.00 writes per sync, written: 0.03 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1432 writes, 6110 keys, 1431 commit groups, 1.0 writes per commit group, ingest: 10.81 MB, 0.02 MB/s
                                           Interval WAL: 1432 writes, 1431 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    120.6      0.23              0.08         8    0.029       0      0       0.0       0.0
                                             L6      1/0   11.74 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    144.2    122.7      0.74              0.23         7    0.106     34K   3666       0.0       0.0
                                            Sum      1/0   11.74 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    110.2    122.2      0.97              0.32        15    0.065     34K   3666       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    132.9    127.2      0.47              0.14         8    0.059     21K   2306       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    144.2    122.7      0.74              0.23         7    0.106     34K   3666       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    122.8      0.23              0.08         7    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.027, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.12 GB write, 0.10 MB/s write, 0.10 GB read, 0.09 MB/s read, 1.0 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564b95c0c9b0#2 capacity: 304.00 MB usage: 5.29 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(333,4.99 MB,1.63999%) FilterBlock(16,103.17 KB,0.0331427%) IndexBlock(16,203.95 KB,0.0655174%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 18:58:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:48.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:48 compute-0 sshd-session[166823]: Connection closed by 192.168.122.30 port 45322
Jan 20 18:58:48 compute-0 sshd-session[166820]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:58:48 compute-0 systemd[1]: session-54.scope: Deactivated successfully.
Jan 20 18:58:48 compute-0 systemd[1]: session-54.scope: Consumed 3min 21.769s CPU time.
Jan 20 18:58:48 compute-0 systemd-logind[796]: Session 54 logged out. Waiting for processes to exit.
Jan 20 18:58:48 compute-0 systemd-logind[796]: Removed session 54.
Jan 20 18:58:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:48 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2924000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:48 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:58:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:48 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:49 compute-0 ceph-mon[74381]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 18:58:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:58:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:49.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:58:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:49 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2900000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:49] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:58:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:49] "GET /metrics HTTP/1.1" 200 48339 "" "Prometheus/2.51.0"
Jan 20 18:58:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:50.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185850 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:58:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:50 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2924000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:50 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2918001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:51 compute-0 ceph-mon[74381]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:58:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:51.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:58:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:51 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:52 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:52 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29240021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:53 compute-0 ceph-mon[74381]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 18:58:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:53 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 20 18:58:53 compute-0 sudo[228068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:58:53 compute-0 sudo[228068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:58:53 compute-0 sudo[228068]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:53 compute-0 sshd-session[228093]: Accepted publickey for zuul from 192.168.122.30 port 35134 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 18:58:53 compute-0 systemd-logind[796]: New session 55 of user zuul.
Jan 20 18:58:53 compute-0 systemd[1]: Started Session 55 of User zuul.
Jan 20 18:58:53 compute-0 sshd-session[228093]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:58:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:54.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:54 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:54 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:58:54
Jan 20 18:58:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:58:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:58:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.nfs', 'images', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log', 'default.rgw.meta']
Jan 20 18:58:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
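[annotation] This balancer pass is a no-op: in upmap mode it prepares at most a fixed number of candidate changes per plan (the 10 in "prepared 0/10", presumably the module's upmap_max_optimizations default) while keeping misplaced objects under the 0.05 ceiling logged above, and preparing 0 of them means the PG distribution already cannot be improved. A rough sketch of the throttle arithmetic, using the PG count from the surrounding pgmap lines:

    # 337 PGs, 5% misplaced ceiling, at most 10 candidate changes per pass
    num_pgs       = 337
    max_misplaced = 0.05
    max_changes   = 10

    # Upper bound on PGs the balancer may put into a misplaced state at once:
    budget = int(num_pgs * max_misplaced)
    print(f"misplaced budget: {budget} PGs, per-pass cap: {max_changes}")  # 16 PGs, cap 10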
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:58:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:55.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
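[annotation] The pg_autoscaler lines are reproducible from their own fields. Assuming the default mon_target_pg_per_osd of 100, the logged targets imply a budget of 300 PGs (i.e. 3 OSDs, matching the 60 GiB cluster): pg target = capacity ratio * bias * 300, then quantized to a power of two, and pg_num is only actually changed when target and current differ by more than the autoscaler's threshold (3x by default), which is why every pool above keeps its current value. Checking two pools against the log:

    # pg_target = used-capacity ratio * bias * (num_osds * mon_target_pg_per_osd)
    budget = 3 * 100   # assumed: 3 OSDs, default mon_target_pg_per_osd = 100

    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0),   # logged ratio, bias
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
    }
    for name, (ratio, bias) in pools.items():
        print(name, ratio * bias * budget)
    # -> 0.0021557249951162337 and 0.0006104707950771635, matching the log lines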
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:58:55 compute-0 ceph-mon[74381]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 20 18:58:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:58:55 compute-0 python3.9[228246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:58:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:55 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29240021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:58:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:56.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:56 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:56 compute-0 python3.9[228402]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:58:56 compute-0 network[228419]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:58:56 compute-0 network[228420]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:58:56 compute-0 network[228421]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:58:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:56 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:57.059Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 18:58:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:58:57.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
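[annotation] Both alertmanager failures are the same symptom: the Ceph dashboard webhook receivers on compute-1 and compute-2 (http://compute-{1,2}.ctlplane.example.com:8443/api/prometheus_receiver) never answer within the notifier's deadline, first a dial i/o timeout and then context deadline exceeded on retry. Note the scheme is http on port 8443; if the dashboard there actually serves TLS, that alone would explain the hang. A minimal reachability probe for the same endpoint (URL copied from the log line; the empty JSON body is just a placeholder):

    import urllib.error
    import urllib.request

    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"

    try:
        # Alertmanager POSTs JSON here; an empty POST is enough to see whether
        # the listener answers at all before its own timeout fires.
        req = urllib.request.Request(URL, data=b"{}", method="POST",
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, HTTP", exc.code)     # server answered, even if 4xx/5xx
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)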
Jan 20 18:58:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:58:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:58:57 compute-0 ceph-mon[74381]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:57 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29000016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:58:58.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:58 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29240021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:58 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:58:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:58:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:58:59.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:58:59 compute-0 ceph-mon[74381]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:58:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:58:59 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:58:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:58:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:59] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 18:58:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:58:59] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 18:59:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:00.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:00 compute-0 ceph-mon[74381]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:00 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:00 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29240095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:01.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:01 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2900002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:01 compute-0 sudo[228696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-giuqiklbbbitgdhiufruausnbbhqqjmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935541.3584402-96-229456116786001/AnsiballZ_setup.py'
Jan 20 18:59:01 compute-0 sudo[228696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:01 compute-0 python3.9[228699]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:59:02 compute-0 sudo[228696]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:02.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:02 compute-0 sudo[228781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrpcjegvruvelomkdtpzzhkmdnhsymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935541.3584402-96-229456116786001/AnsiballZ_dnf.py'
Jan 20 18:59:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:02 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:02 compute-0 sudo[228781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:02 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:02 compute-0 ceph-mon[74381]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:02 compute-0 python3.9[228783]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:59:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:03.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:03 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29240095a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:59:04 compute-0 podman[228787]: 2026-01-20 18:59:04.077887068 +0000 UTC m=+0.055177631 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 20 18:59:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:04.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:04 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2900002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:04 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:05 compute-0 ceph-mon[74381]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:59:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:05 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c002d00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:06 compute-0 sudo[228809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:59:06 compute-0 sudo[228809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:06 compute-0 sudo[228809]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:06 compute-0 sudo[228834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 18:59:06 compute-0 sudo[228834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:06.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:06 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f292400a2b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:06 compute-0 sudo[228834]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:06 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f2900003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:07.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:59:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 18:59:07 compute-0 ceph-mon[74381]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 18:59:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:07.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:07 compute-0 sudo[228890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:59:07 compute-0 sudo[228890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:07 compute-0 sudo[228890]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:07 compute-0 sudo[228915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 18:59:07 compute-0 sudo[228915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:07 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f29180038d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.670916272 +0000 UTC m=+0.042861242 container create 892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 18:59:07 compute-0 systemd[1]: Started libpod-conmon-892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3.scope.
Jan 20 18:59:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:59:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.650337543 +0000 UTC m=+0.022282553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.751274296 +0000 UTC m=+0.123219306 container init 892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.758355615 +0000 UTC m=+0.130300595 container start 892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.761136065 +0000 UTC m=+0.133081095 container attach 892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 18:59:07 compute-0 systemd[1]: libpod-892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3.scope: Deactivated successfully.
Jan 20 18:59:07 compute-0 flamboyant_banzai[228998]: 167 167
Jan 20 18:59:07 compute-0 conmon[228998]: conmon 892bcfc1bf88a577a50e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3.scope/container/memory.events
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.7660751 +0000 UTC m=+0.138020090 container died 892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 18:59:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8133f60351eb56a6885295b4d163e69735817f1e9ba36c7ddce993d4f2ea6423-merged.mount: Deactivated successfully.
Jan 20 18:59:07 compute-0 podman[228980]: 2026-01-20 18:59:07.835838327 +0000 UTC m=+0.207783317 container remove 892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_banzai, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:59:07 compute-0 systemd[1]: libpod-conmon-892bcfc1bf88a577a50e007264f47925c1a5729e02a65bc6d34ce7b55274f7e3.scope: Deactivated successfully.
Jan 20 18:59:07 compute-0 podman[229024]: 2026-01-20 18:59:07.992197608 +0000 UTC m=+0.041269101 container create 33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_clarke, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:59:08 compute-0 systemd[1]: Started libpod-conmon-33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c.scope.
Jan 20 18:59:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba5d529f1101f81747195eda68e148d94a5c48e9862d8605368d8ecf7331b98/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba5d529f1101f81747195eda68e148d94a5c48e9862d8605368d8ecf7331b98/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba5d529f1101f81747195eda68e148d94a5c48e9862d8605368d8ecf7331b98/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba5d529f1101f81747195eda68e148d94a5c48e9862d8605368d8ecf7331b98/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ba5d529f1101f81747195eda68e148d94a5c48e9862d8605368d8ecf7331b98/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
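[annotation] The xfs "supports timestamps until 2038 (0x7fffffff)" kernel lines are informational: these overlay remounts carry the classic signed 32-bit inode timestamp limit, and 0x7fffffff seconds after the Unix epoch lands on 2038-01-19. The arithmetic:

    from datetime import datetime, timezone

    limit = 0x7fffffff          # 2147483647 seconds since 1970-01-01 UTC
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00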
Jan 20 18:59:08 compute-0 podman[229024]: 2026-01-20 18:59:08.045370498 +0000 UTC m=+0.094441991 container init 33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_clarke, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:59:08 compute-0 podman[229024]: 2026-01-20 18:59:08.055130904 +0000 UTC m=+0.104202397 container start 33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_clarke, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 18:59:08 compute-0 podman[229024]: 2026-01-20 18:59:08.059871963 +0000 UTC m=+0.108943456 container attach 33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 18:59:08 compute-0 podman[229024]: 2026-01-20 18:59:07.974149083 +0000 UTC m=+0.023220596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:08 compute-0 sudo[228781]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 18:59:08 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 18:59:08 compute-0 flamboyant_clarke[229041]: --> passed data devices: 0 physical, 1 LVM
Jan 20 18:59:08 compute-0 flamboyant_clarke[229041]: --> All data devices are unavailable
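[annotation] The short-lived flamboyant_* containers are cephadm shelling out to ceph-volume inside the ceph image. The "lvm batch --no-auto /dev/ceph_vg0/ceph_lv0" run reports "All data devices are unavailable", which typically means the logical volume is already consumed (for example by an existing OSD); that is exactly what the "lvm list --format json" call a few entries below goes on to check. A sketch of running the same check by hand (assumes cephadm is installed on the host; field names follow ceph-volume's JSON output):

    import json
    import subprocess

    # Mirrors the cephadm call logged below: list LVs that ceph-volume already
    # owns for this cluster fsid; a device that shows up here is "unavailable"
    # to a new `lvm batch` run.
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid",
         "aecbbf3b-b405-507b-97d7-637a83f5b4b1", "--", "lvm", "list",
         "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for osd_id, entries in json.loads(out).items():
        for entry in entries:
            print(osd_id, entry["lv_path"], entry["tags"].get("ceph.osd_fsid"))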
Jan 20 18:59:08 compute-0 systemd[1]: libpod-33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c.scope: Deactivated successfully.
Jan 20 18:59:08 compute-0 podman[229024]: 2026-01-20 18:59:08.406900368 +0000 UTC m=+0.455971861 container died 33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 18:59:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ba5d529f1101f81747195eda68e148d94a5c48e9862d8605368d8ecf7331b98-merged.mount: Deactivated successfully.
Jan 20 18:59:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:08.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:08 compute-0 podman[229024]: 2026-01-20 18:59:08.450918638 +0000 UTC m=+0.499990121 container remove 33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_clarke, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 18:59:08 compute-0 systemd[1]: libpod-conmon-33364c3a929966437af86b626288e31a6532e28b3b0c208ba072680ecd9ea37c.scope: Deactivated successfully.
Jan 20 18:59:08 compute-0 sudo[228915]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:08 compute-0 sudo[229093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:59:08 compute-0 sudo[229093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:08 compute-0 sudo[229093]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:08 compute-0 sudo[229118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 18:59:08 compute-0 sudo[229118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:08 compute-0 kernel: ganesha.nfsd[227990]: segfault at 50 ip 00007f29a790832e sp 00007f292dffa210 error 4 in libntirpc.so.5.8[7f29a78ed000+2c000] likely on CPU 3 (core 0, socket 3)
Jan 20 18:59:08 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
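[annotation] The crash record decodes from the line itself: the faulting instruction sits at offset 0x1b32e into libntirpc.so.5.8 (ip 0x7f29a790832e minus the mapping base 0x7f29a78ed000), "error 4" is a user-mode read of a not-present page, and the fault address 0x50 combined with the marked instruction bytes (<45> 8b 65 50, i.e. mov r12d,[r13+0x50]) points at a NULL r13 dereferenced at structure offset 0x50, plausibly one of the transports this same process has been setting dead all along. A small decode of the numbers:

    # Decode the kernel segfault line for ganesha.nfsd:
    ip, base, size = 0x7f29a790832e, 0x7f29a78ed000, 0x2c000
    assert base <= ip < base + size
    print(f"fault at libntirpc.so.5.8+{ip - base:#x}")   # +0x1b32e

    error = 4
    # x86 page-fault error code: bit0 = page present, bit1 = write, bit2 = user mode
    print("present" if error & 1 else "not-present",
          "write" if error & 2 else "read",
          "user" if error & 4 else "kernel")             # not-present read user

The "Started Process Core Dump (PID 229173/UID 0)" entry just below is systemd-coredump capturing the dump; with matching debuginfo, addr2line -e libntirpc.so.5.8 0x1b32e (or coredumpctl debug) resolves the offset to a function name.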
Jan 20 18:59:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[225818]: 20/01/2026 18:59:08 : epoch 696fd05a : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f291c002d00 fd 38 proxy ignored for local
Jan 20 18:59:08 compute-0 systemd[1]: Started Process Core Dump (PID 229173/UID 0).
Jan 20 18:59:08 compute-0 podman[229260]: 2026-01-20 18:59:08.990199827 +0000 UTC m=+0.041020894 container create 815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:59:09 compute-0 systemd[1]: Started libpod-conmon-815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9.scope.
Jan 20 18:59:09 compute-0 podman[229260]: 2026-01-20 18:59:08.971918207 +0000 UTC m=+0.022739294 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:59:09 compute-0 sudo[229328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhzwujmtviuraldtctduwyfmqkyikqoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935548.6247687-132-170750334295220/AnsiballZ_stat.py'
Jan 20 18:59:09 compute-0 sudo[229328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:09 compute-0 podman[229260]: 2026-01-20 18:59:09.08911119 +0000 UTC m=+0.139932287 container init 815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:59:09 compute-0 podman[229260]: 2026-01-20 18:59:09.097509112 +0000 UTC m=+0.148330179 container start 815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:59:09 compute-0 gracious_chatelet[229326]: 167 167
Jan 20 18:59:09 compute-0 systemd[1]: libpod-815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9.scope: Deactivated successfully.
Jan 20 18:59:09 compute-0 podman[229260]: 2026-01-20 18:59:09.105637306 +0000 UTC m=+0.156458403 container attach 815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:59:09 compute-0 podman[229260]: 2026-01-20 18:59:09.107046482 +0000 UTC m=+0.157867549 container died 815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-391149b8a97be7552a583adca12f576a97b1938a4cb2a30b232dd4f94d8a5820-merged.mount: Deactivated successfully.
Jan 20 18:59:09 compute-0 podman[229260]: 2026-01-20 18:59:09.161429532 +0000 UTC m=+0.212250599 container remove 815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:59:09 compute-0 systemd[1]: libpod-conmon-815edc36267a6dce05ad233965d5c70b2d7c02477d1f58cbeb7615291bfd8dc9.scope: Deactivated successfully.
Jan 20 18:59:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:09.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:09 compute-0 python3.9[229331]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:59:09 compute-0 sudo[229328]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:09 compute-0 ceph-mon[74381]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.336498444 +0000 UTC m=+0.045574759 container create 157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kirch, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 18:59:09 compute-0 systemd[1]: Started libpod-conmon-157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99.scope.
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.318148502 +0000 UTC m=+0.027224847 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fab5fc92d33b93b10fc0a2413d28331a9d381af2cb534875bd1cc9da9b36e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fab5fc92d33b93b10fc0a2413d28331a9d381af2cb534875bd1cc9da9b36e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fab5fc92d33b93b10fc0a2413d28331a9d381af2cb534875bd1cc9da9b36e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/773fab5fc92d33b93b10fc0a2413d28331a9d381af2cb534875bd1cc9da9b36e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.441759936 +0000 UTC m=+0.150836291 container init 157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kirch, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.456500218 +0000 UTC m=+0.165576543 container start 157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.470837819 +0000 UTC m=+0.179914154 container attach 157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:59:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]: {
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:     "0": [
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:         {
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "devices": [
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "/dev/loop3"
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             ],
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "lv_name": "ceph_lv0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "lv_size": "21470642176",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "name": "ceph_lv0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "tags": {
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.cluster_name": "ceph",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.crush_device_class": "",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.encrypted": "0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.osd_id": "0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.type": "block",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.vdo": "0",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:                 "ceph.with_tpm": "0"
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             },
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "type": "block",
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:             "vg_name": "ceph_vg0"
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:         }
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]:     ]
Jan 20 18:59:09 compute-0 suspicious_kirch[229393]: }
Jan 20 18:59:09 compute-0 systemd[1]: libpod-157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99.scope: Deactivated successfully.
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.790495044 +0000 UTC m=+0.499571369 container died 157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 18:59:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:09] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 18:59:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:09] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 18:59:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-773fab5fc92d33b93b10fc0a2413d28331a9d381af2cb534875bd1cc9da9b36e-merged.mount: Deactivated successfully.
Jan 20 18:59:09 compute-0 podman[229354]: 2026-01-20 18:59:09.842900655 +0000 UTC m=+0.551976980 container remove 157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 18:59:09 compute-0 systemd[1]: libpod-conmon-157cb5b478091199798b5816fe9569dda88e86d5728036fa7a4ad73ee1638b99.scope: Deactivated successfully.
Jan 20 18:59:09 compute-0 sudo[229118]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:09 compute-0 sudo[229478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 18:59:09 compute-0 sudo[229478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:09 compute-0 sudo[229478]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:10 compute-0 systemd-coredump[229187]: Process 225831 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f29a790832e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 18:59:10 compute-0 sudo[229522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 18:59:10 compute-0 sudo[229522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:10 compute-0 systemd[1]: systemd-coredump@10-229173-0.service: Deactivated successfully.
Jan 20 18:59:10 compute-0 systemd[1]: systemd-coredump@10-229173-0.service: Consumed 1.378s CPU time.
Jan 20 18:59:10 compute-0 sudo[229592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwwsyytfdvqvpddqzckxgjzngzuzkbvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935549.6301298-162-192682081622642/AnsiballZ_command.py'
Jan 20 18:59:10 compute-0 sudo[229592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:10 compute-0 podman[229597]: 2026-01-20 18:59:10.230243807 +0000 UTC m=+0.045961639 container died d071ca8dfc06ee12e00a2e29069311aae236f780132e498d62303b0eb9bbd23c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-007148ab82a89dc9cb08a74cc1ac49450b286dcc834e9e6c2e3dcd1e7350cd36-merged.mount: Deactivated successfully.
Jan 20 18:59:10 compute-0 podman[229597]: 2026-01-20 18:59:10.268338107 +0000 UTC m=+0.084055929 container remove d071ca8dfc06ee12e00a2e29069311aae236f780132e498d62303b0eb9bbd23c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:59:10 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 18:59:10 compute-0 ceph-mon[74381]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:59:10 compute-0 python3.9[229598]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:10 compute-0 sudo[229592]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:10 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 18:59:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:10 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.655s CPU time.
Jan 20 18:59:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:10.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.487449948 +0000 UTC m=+0.045947999 container create 700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:59:10 compute-0 systemd[1]: Started libpod-conmon-700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923.scope.
Jan 20 18:59:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.468545183 +0000 UTC m=+0.027043234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.574912073 +0000 UTC m=+0.133410124 container init 700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.582005732 +0000 UTC m=+0.140503763 container start 700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:59:10 compute-0 charming_mirzakhani[229717]: 167 167
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.587185372 +0000 UTC m=+0.145683433 container attach 700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 18:59:10 compute-0 systemd[1]: libpod-700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923.scope: Deactivated successfully.
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.588141636 +0000 UTC m=+0.146639677 container died 700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 18:59:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-1679e6ec34bd93e8ac277ac568419e7ba5fa6878240e5d048d976a5fde9e3c7c-merged.mount: Deactivated successfully.
Jan 20 18:59:10 compute-0 podman[229696]: 2026-01-20 18:59:10.63035153 +0000 UTC m=+0.188849571 container remove 700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mirzakhani, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 18:59:10 compute-0 systemd[1]: libpod-conmon-700f7aee46b0e18b4ebeeac3e6b3a34caf73a50cfdb2ffcd7d3fc01253668923.scope: Deactivated successfully.
Jan 20 18:59:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:10 compute-0 podman[229742]: 2026-01-20 18:59:10.801189305 +0000 UTC m=+0.044659747 container create 71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 18:59:10 compute-0 systemd[1]: Started libpod-conmon-71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09.scope.
Jan 20 18:59:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 18:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196f75098d90c37137645fe1fc431e31a9a7d1fb338406da03d9813b76f5d129/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196f75098d90c37137645fe1fc431e31a9a7d1fb338406da03d9813b76f5d129/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196f75098d90c37137645fe1fc431e31a9a7d1fb338406da03d9813b76f5d129/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196f75098d90c37137645fe1fc431e31a9a7d1fb338406da03d9813b76f5d129/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:10 compute-0 podman[229742]: 2026-01-20 18:59:10.785379027 +0000 UTC m=+0.028849509 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:10 compute-0 podman[229742]: 2026-01-20 18:59:10.889546301 +0000 UTC m=+0.133016773 container init 71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_ptolemy, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:59:10 compute-0 podman[229742]: 2026-01-20 18:59:10.896714842 +0000 UTC m=+0.140185294 container start 71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:59:10 compute-0 podman[229742]: 2026-01-20 18:59:10.900712102 +0000 UTC m=+0.144182574 container attach 71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_ptolemy, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 18:59:11 compute-0 sudo[229899]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldmqcnuinhkmhhxtlokghogddcrolfvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935550.8960223-192-212440281258555/AnsiballZ_stat.py'
Jan 20 18:59:11 compute-0 sudo[229899]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:11.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:11 compute-0 python3.9[229904]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:59:11 compute-0 sudo[229899]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:11 compute-0 lvm[229984]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:59:11 compute-0 lvm[229984]: VG ceph_vg0 finished
Jan 20 18:59:11 compute-0 vibrant_ptolemy[229758]: {}
Jan 20 18:59:11 compute-0 systemd[1]: libpod-71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09.scope: Deactivated successfully.
Jan 20 18:59:11 compute-0 systemd[1]: libpod-71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09.scope: Consumed 1.262s CPU time.
Jan 20 18:59:11 compute-0 podman[229742]: 2026-01-20 18:59:11.661619248 +0000 UTC m=+0.905089730 container died 71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 18:59:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-196f75098d90c37137645fe1fc431e31a9a7d1fb338406da03d9813b76f5d129-merged.mount: Deactivated successfully.
Jan 20 18:59:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:11 compute-0 podman[229742]: 2026-01-20 18:59:11.722753109 +0000 UTC m=+0.966223591 container remove 71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_ptolemy, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 18:59:11 compute-0 systemd[1]: libpod-conmon-71b51a30f133b9483c4cfb87d40195f8f3ebc9400b8d4d25e15d7e3bc44c3b09.scope: Deactivated successfully.
Jan 20 18:59:11 compute-0 sudo[229522]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 18:59:12 compute-0 sudo[230129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpnhzvicdbbfgbewtdfqcjawnfjqmygg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935551.6653223-216-158018331082750/AnsiballZ_command.py'
Jan 20 18:59:12 compute-0 sudo[230129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 18:59:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:12 compute-0 sudo[230132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 18:59:12 compute-0 sudo[230132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:12 compute-0 sudo[230132]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:12 compute-0 python3.9[230131]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:12 compute-0 sudo[230129]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:12 compute-0 podman[230156]: 2026-01-20 18:59:12.34663384 +0000 UTC m=+0.121446952 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 18:59:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:12.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:12 compute-0 sudo[230334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldsrifryfsicekmtwesqhhyibfcgttzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935552.4435854-240-210086672105093/AnsiballZ_stat.py'
Jan 20 18:59:12 compute-0 sudo[230334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:13 compute-0 python3.9[230336]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:13 compute-0 sudo[230334]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:13 compute-0 ceph-mon[74381]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 18:59:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:13.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:13 compute-0 sudo[230457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucznlriaqpflzftwvftfbpsztsvhikoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935552.4435854-240-210086672105093/AnsiballZ_copy.py'
Jan 20 18:59:13 compute-0 sudo[230457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:59:13 compute-0 python3.9[230460]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935552.4435854-240-210086672105093/.source.iscsi _original_basename=.t_vl0yc1 follow=False checksum=09a61e631865783ab8c612cfbd46b5b2abbec15e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:13 compute-0 sudo[230457]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:13 compute-0 sudo[230462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:59:13 compute-0 sudo[230462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:13 compute-0 sudo[230462]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:14.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:14 compute-0 sudo[230636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aueragtcptuqsczwhojfvmzlyibzizou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935553.9897096-285-173075981852505/AnsiballZ_file.py'
Jan 20 18:59:14 compute-0 sudo[230636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185914 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:59:14 compute-0 python3.9[230638]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:14 compute-0 sudo[230636]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:15 compute-0 ceph-mon[74381]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 18:59:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:15.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:15 compute-0 sudo[230790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqtukniofwscvkvidwzzjyrlpichwgkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935554.9910107-309-20405513195233/AnsiballZ_lineinfile.py'
Jan 20 18:59:15 compute-0 sudo[230790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:15 compute-0 python3.9[230792]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:15 compute-0 sudo[230790]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:16.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:16 compute-0 sudo[230942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugnhftakbowhjljybjfiilbkdziiioin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935556.2453644-336-73179164761496/AnsiballZ_systemd_service.py'
Jan 20 18:59:16 compute-0 sudo[230942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:17.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:59:17 compute-0 ceph-mon[74381]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:17 compute-0 python3.9[230944]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:17.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:17 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 20 18:59:17 compute-0 sudo[230942]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:59:17 compute-0 sudo[231100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umsjshxupjgktlllgtruhrdlrcfycliq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935557.5704937-360-227244983101790/AnsiballZ_systemd_service.py'
Jan 20 18:59:17 compute-0 sudo[231100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:18 compute-0 python3.9[231102]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:18 compute-0 systemd[1]: Reloading.
Jan 20 18:59:18 compute-0 systemd-rc-local-generator[231134]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:18 compute-0 systemd-sysv-generator[231137]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:18.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:18 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 18:59:18 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 20 18:59:18 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 18:59:18 compute-0 systemd[1]: Started Open-iSCSI.
Jan 20 18:59:18 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 20 18:59:18 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 20 18:59:18 compute-0 sudo[231100]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:19.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:19 compute-0 ceph-mon[74381]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 18:59:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:19] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:59:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:19] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 18:59:20 compute-0 python3.9[231304]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:59:20 compute-0 network[231321]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:59:20 compute-0 network[231322]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:59:20 compute-0 network[231323]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:59:20 compute-0 ceph-mon[74381]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:20.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:20 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 11.
Jan 20 18:59:20 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:59:20 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.655s CPU time.
Jan 20 18:59:20 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 18:59:21 compute-0 podman[231380]: 2026-01-20 18:59:21.177285087 +0000 UTC m=+0.062993278 container create e9ff7cb93c378d4a91bb0fc81458df1345cae54b85feb4cefa344df0eea62bff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 18:59:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:21.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f086c1e9211c56265660d5300b2a57482eda3ad0c183e51709af7aed1603004e/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f086c1e9211c56265660d5300b2a57482eda3ad0c183e51709af7aed1603004e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f086c1e9211c56265660d5300b2a57482eda3ad0c183e51709af7aed1603004e/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f086c1e9211c56265660d5300b2a57482eda3ad0c183e51709af7aed1603004e/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 18:59:21 compute-0 podman[231380]: 2026-01-20 18:59:21.153430606 +0000 UTC m=+0.039138817 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 18:59:21 compute-0 podman[231380]: 2026-01-20 18:59:21.250453771 +0000 UTC m=+0.136161992 container init e9ff7cb93c378d4a91bb0fc81458df1345cae54b85feb4cefa344df0eea62bff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 18:59:21 compute-0 podman[231380]: 2026-01-20 18:59:21.25598215 +0000 UTC m=+0.141690341 container start e9ff7cb93c378d4a91bb0fc81458df1345cae54b85feb4cefa344df0eea62bff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 18:59:21 compute-0 bash[231380]: e9ff7cb93c378d4a91bb0fc81458df1345cae54b85feb4cefa344df0eea62bff
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 18:59:21 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 18:59:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:59:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:22.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:22 compute-0 ceph-mon[74381]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:23.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:59:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:24.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:24 compute-0 ceph-mon[74381]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:59:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:25.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:25 compute-0 sudo[231695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyyuovjxymixwywpokvhvypohpzaabso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935565.0564291-429-106344605938185/AnsiballZ_dnf.py'
Jan 20 18:59:25 compute-0 sudo[231695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:25 compute-0 python3.9[231697]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:59:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:59:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185925 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 18:59:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:59:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:26.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:27 compute-0 ceph-mon[74381]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 18:59:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:27.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:59:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:27.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:59:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:59:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:59:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:59:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:59:27 compute-0 systemd[1]: Reloading.
Jan 20 18:59:27 compute-0 systemd-rc-local-generator[231745]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:27 compute-0 systemd-sysv-generator[231748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:59:28 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:59:28 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:59:28 compute-0 systemd[1]: run-rd9882f0beee84db3b7ea14c8d1e77d40.service: Deactivated successfully.
Jan 20 18:59:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:28 compute-0 sudo[231695]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:29 compute-0 ceph-mon[74381]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:29.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:29 compute-0 sudo[232017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syvkxhssjmveykjywqtmqqrmxtrwlkec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935569.2424078-456-207932103552244/AnsiballZ_file.py'
Jan 20 18:59:29 compute-0 sudo[232017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:29] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:59:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:29] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:59:29 compute-0 python3.9[232019]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 18:59:29 compute-0 sudo[232017]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:59:30.193 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 18:59:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:59:30.194 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 18:59:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 18:59:30.194 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 18:59:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000049s ======
Jan 20 18:59:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Jan 20 18:59:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:30 compute-0 sudo[232170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szlunachvkyobyeznmaakglpwijuwsyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935570.1548955-480-220892222305360/AnsiballZ_modprobe.py'
Jan 20 18:59:30 compute-0 sudo[232170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:30 compute-0 python3.9[232172]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 20 18:59:30 compute-0 sudo[232170]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 18:59:31 compute-0 ceph-mon[74381]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:31.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:31 compute-0 sudo[232326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkdzunjqrairhuimibvitgdcrmovehcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935571.200235-504-70448621666644/AnsiballZ_stat.py'
Jan 20 18:59:31 compute-0 sudo[232326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:31 compute-0 python3.9[232328]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:31 compute-0 sudo[232326]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:32 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 18:59:32 compute-0 sudo[232451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohzxfxnrwixfvpheciamgatovtxadwvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935571.200235-504-70448621666644/AnsiballZ_copy.py'
Jan 20 18:59:32 compute-0 sudo[232451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:32 compute-0 python3.9[232453]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935571.200235-504-70448621666644/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:32 compute-0 sudo[232451]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:32.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:32 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:32 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:59:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:32 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:59:33 compute-0 sudo[232603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaoffqelktcdxuskkxkcslrracjiwmfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935572.8479846-552-59983275335928/AnsiballZ_lineinfile.py'
Jan 20 18:59:33 compute-0 sudo[232603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:33 compute-0 ceph-mon[74381]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:33.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:33 compute-0 python3.9[232605]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:33 compute-0 sudo[232603]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:33 compute-0 sudo[232684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:59:33 compute-0 sudo[232684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:33 compute-0 sudo[232684]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:34 compute-0 podman[232756]: 2026-01-20 18:59:34.446824012 +0000 UTC m=+0.059711165 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 20 18:59:34 compute-0 sudo[232798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhkbnybqeioqqengjwdnswisenskpjhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935573.7469501-576-200411239105321/AnsiballZ_systemd.py'
Jan 20 18:59:34 compute-0 sudo[232798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:34.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:34 compute-0 python3.9[232804]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:59:34 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 18:59:34 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 20 18:59:34 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 20 18:59:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 18:59:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 18:59:34 compute-0 sudo[232798]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:35.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:35 compute-0 ceph-mon[74381]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 18:59:35 compute-0 sudo[232958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qezblswwmosyvfkvmwmucxtulrqtyilk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935575.1834853-600-75163022327082/AnsiballZ_command.py'
Jan 20 18:59:35 compute-0 sudo[232958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:35 compute-0 python3.9[232960]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:35 compute-0 sudo[232958]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:59:36 compute-0 sudo[233113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksfxqxytvhkdapywehixlkunqcnunjol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935576.21214-630-236343602416480/AnsiballZ_stat.py'
Jan 20 18:59:36 compute-0 sudo[233113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:36.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:36 compute-0 python3.9[233115]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:59:36 compute-0 sudo[233113]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:37.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:59:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:37.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:59:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:37.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:37 compute-0 ceph-mon[74381]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Jan 20 18:59:37 compute-0 sudo[233265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avvqvuvdlchqodkwvbgpbvnkjenzhpgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935577.0406604-657-261006911373292/AnsiballZ_stat.py'
Jan 20 18:59:37 compute-0 sudo[233265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:37 compute-0 python3.9[233267]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:37 compute-0 sudo[233265]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:59:37 compute-0 sudo[233390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrpybogjpkdoryftgwbjfamdibjnekrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935577.0406604-657-261006911373292/AnsiballZ_copy.py'
Jan 20 18:59:37 compute-0 sudo[233390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:38 compute-0 python3.9[233392]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935577.0406604-657-261006911373292/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:38 compute-0 sudo[233390]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:38.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d4000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:38 compute-0 sudo[233556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utmfpjtzpkfoiomsiflvvfoeojhpwtbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935578.4706585-702-190287796313720/AnsiballZ_command.py'
Jan 20 18:59:38 compute-0 sudo[233556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00016e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:38 compute-0 python3.9[233558]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:38 compute-0 sudo[233556]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 18:59:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:39.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 18:59:39 compute-0 ceph-mon[74381]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Jan 20 18:59:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:39 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:39 compute-0 sudo[233709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxncpzrcqrfyibnqgrreingckgwxfuae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935579.269287-726-111960817297636/AnsiballZ_lineinfile.py'
Jan 20 18:59:39 compute-0 sudo[233709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:39 compute-0 python3.9[233711]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 18:59:39 compute-0 sudo[233709]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:39] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:59:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:39] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 18:59:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:59:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:40.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:40 compute-0 sudo[233863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtmcevjdqcgworvznnqolyqhztgmatzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935580.0298195-750-1999792449963/AnsiballZ_replace.py'
Jan 20 18:59:40 compute-0 sudo[233863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185940 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:59:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:40 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:40 compute-0 python3.9[233865]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:40 compute-0 sudo[233863]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:40 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8001ac0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:41 compute-0 sudo[234015]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unxwoybttadjrtnkpuatfgtwiaseoidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935580.933764-774-187945355916597/AnsiballZ_replace.py'
Jan 20 18:59:41 compute-0 sudo[234015]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:41.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:41 compute-0 ceph-mon[74381]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 18:59:41 compute-0 python3.9[234017]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:41 compute-0 sudo[234015]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 18:59:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 18:59:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 18:59:42 compute-0 sudo[234169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmgsmawvdbupvvugolzqscpgwlvtilax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935581.7793808-801-44129346911029/AnsiballZ_lineinfile.py'
Jan 20 18:59:42 compute-0 sudo[234169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:42 compute-0 python3.9[234171]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:42 compute-0 sudo[234169]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:42 compute-0 ceph-mon[74381]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 18:59:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000024s ======
Jan 20 18:59:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:42.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Jan 20 18:59:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:42 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:42 compute-0 sudo[234333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgnqvnvljyjtfuwbbpxyhllsurnfslyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935582.3952503-801-197136396670687/AnsiballZ_lineinfile.py'
Jan 20 18:59:42 compute-0 sudo[234333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:42 compute-0 podman[234295]: 2026-01-20 18:59:42.713601822 +0000 UTC m=+0.087998188 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 18:59:42 compute-0 python3.9[234340]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:42 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:42 compute-0 sudo[234333]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:43.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:43 compute-0 sudo[234496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkintxnkrmsuszurgcwftpmzfmvzyvpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935583.0050569-801-91857515644574/AnsiballZ_lineinfile.py'
Jan 20 18:59:43 compute-0 sudo[234496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:43 compute-0 python3.9[234498]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:43 compute-0 sudo[234496]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:43 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 20 18:59:43 compute-0 sudo[234650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhqfcvjyktvvlzzaurdmkktwdkaxyzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935583.5998256-801-66337417719980/AnsiballZ_lineinfile.py'
Jan 20 18:59:43 compute-0 sudo[234650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:44 compute-0 python3.9[234652]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:44 compute-0 sudo[234650]: pam_unix(sudo:session): session closed for user root
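The three lineinfile invocations above pin recheck_wwid yes, skip_kpartx yes, and user_friendly_names no inside the defaults section of /etc/multipath.conf: each replaces an existing line matching the regexp, or inserts after the first line matching ^defaults. A simplified sketch of that insert-or-replace behavior (the real module also handles backrefs, validation, and replaces the last regexp match rather than the first):

    import re

    def ensure_line(text: str, regexp: str, line: str, insertafter: str) -> str:
        lines = text.splitlines()
        pat, anchor = re.compile(regexp), re.compile(insertafter)
        for i, l in enumerate(lines):
            if pat.search(l):
                lines[i] = line              # replace the existing setting
                return "\n".join(lines) + "\n"
        for i, l in enumerate(lines):
            if anchor.search(l):             # firstmatch=True: first anchor hit
                lines.insert(i + 1, line)
                return "\n".join(lines) + "\n"
        return "\n".join(lines + [line]) + "\n"

    conf = "defaults {\n    user_friendly_names yes\n}\n"
    conf = ensure_line(conf, r"^\s+user_friendly_names",
                       "        user_friendly_names no", r"^defaults")
    assert "user_friendly_names no" in conf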
Jan 20 18:59:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:44.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:44 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 18:59:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:44 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:44 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:45 compute-0 sudo[234802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovdkwomqxzkcvfedxudqazdpqaipwdwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935584.7431273-888-58461890635305/AnsiballZ_stat.py'
Jan 20 18:59:45 compute-0 sudo[234802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:45 compute-0 ceph-mon[74381]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 20 18:59:45 compute-0 python3.9[234804]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:59:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:45.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:45 compute-0 sudo[234802]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:45 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 20 18:59:46 compute-0 sudo[234958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhevwucjlopxjpiizalzncwfxzqehqsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935585.8177052-912-7896424431214/AnsiballZ_command.py'
Jan 20 18:59:46 compute-0 sudo[234958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:46 compute-0 python3.9[234960]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:46 compute-0 sudo[234958]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:46.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:46 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:47 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:47.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 18:59:47 compute-0 ceph-mon[74381]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Jan 20 18:59:47 compute-0 sudo[235111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uytziyedrxcmmdltyixecxfbsfjybwfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935586.800948-939-31465818888184/AnsiballZ_systemd_service.py'
Jan 20 18:59:47 compute-0 sudo[235111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:59:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:47.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:59:47 compute-0 python3.9[235113]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:47 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:47 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 20 18:59:47 compute-0 sudo[235111]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 20 18:59:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/185947 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 18:59:48 compute-0 sudo[235269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muuqsxecfzktraorpyothidohmyudyar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935587.7950974-963-181531215640709/AnsiballZ_systemd_service.py'
Jan 20 18:59:48 compute-0 sudo[235269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:48 compute-0 python3.9[235271]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:48.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:48 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 20 18:59:48 compute-0 udevadm[235276]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 20 18:59:48 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 20 18:59:48 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 18:59:48 compute-0 multipathd[235279]: --------start up--------
Jan 20 18:59:48 compute-0 multipathd[235279]: read /etc/multipath.conf
Jan 20 18:59:48 compute-0 multipathd[235279]: path checkers start up
Jan 20 18:59:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:48 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:48 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 18:59:48 compute-0 sudo[235269]: pam_unix(sudo:session): session closed for user root
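The two systemd_service tasks above (enabled=True, state=started for multipathd.socket and then multipathd) amount to "systemctl enable --now" for each unit; the udevadm warning during startup is systemd noting that multipathd.service still pulls in the deprecated systemd-udev-settle.service. A hedged subprocess sketch of the same two calls, unit names taken from the log:

    import subprocess

    def enable_now(unit: str) -> None:
        # 'enable --now' = persistent enablement plus an immediate start,
        # matching enabled=True state=started in the Ansible tasks.
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)

    for unit in ("multipathd.socket", "multipathd.service"):
        enable_now(unit)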
Jan 20 18:59:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:49 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:49.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:49 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:59:49 compute-0 sudo[235438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbbdvbagmjpshpaalaeebkqufrkqjxce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935589.403877-999-273280163193968/AnsiballZ_file.py'
Jan 20 18:59:49 compute-0 sudo[235438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:49] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:59:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:49] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 18:59:49 compute-0 python3.9[235440]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 18:59:50 compute-0 sudo[235438]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:50 compute-0 ceph-mon[74381]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Jan 20 18:59:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 18:59:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:50.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 18:59:50 compute-0 sudo[235590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyavbfsbtioeflghywkxmxhmckxmahef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935590.2977345-1023-111545181351422/AnsiballZ_modprobe.py'
Jan 20 18:59:50 compute-0 sudo[235590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:50 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:50 compute-0 python3.9[235592]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 20 18:59:50 compute-0 kernel: Key type psk registered
Jan 20 18:59:50 compute-0 sudo[235590]: pam_unix(sudo:session): session closed for user root
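community.general.modprobe with state=present is essentially a guarded "modprobe nvme-fabrics" (persistent=disabled leaves boot persistence to the tasks that follow); the kernel's "Key type psk registered" line is consistent with the module load pulling in its dependencies. A sketch with the usual idempotence check against /proc/modules, where loaded names use underscores:

    import subprocess

    def load_module(name: str) -> None:
        loaded = name.replace("-", "_")
        with open("/proc/modules") as f:
            if any(l.split()[0] == loaded for l in f):
                return                       # already loaded: nothing to do
        subprocess.run(["modprobe", name], check=True)

    load_module("nvme-fabrics")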
Jan 20 18:59:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:51 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:51.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:51 compute-0 ceph-mon[74381]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:59:51 compute-0 sudo[235753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzdtshsleiknqvsopxdveokatknhaggp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935591.192033-1047-58903342868377/AnsiballZ_stat.py'
Jan 20 18:59:51 compute-0 sudo[235753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:51 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80034e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:51 compute-0 python3.9[235755]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:51 compute-0 sudo[235753]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:59:52 compute-0 sudo[235878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzoqifsijljpynsalnigdfbsusfhjgyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935591.192033-1047-58903342868377/AnsiballZ_copy.py'
Jan 20 18:59:52 compute-0 sudo[235878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:52 compute-0 python3.9[235880]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935591.192033-1047-58903342868377/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:52 compute-0 sudo[235878]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:52.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:52 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:53 compute-0 sudo[236030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ishtacltkhhaxbmezitsakjpkardxrno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935592.7850814-1095-140596890114366/AnsiballZ_lineinfile.py'
Jan 20 18:59:53 compute-0 sudo[236030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:53 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:53 compute-0 python3.9[236032]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:53 compute-0 sudo[236030]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:53.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:53 compute-0 ceph-mon[74381]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Jan 20 18:59:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:53 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:59:53 compute-0 sudo[236184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbjfulcjpolgmxxoshmsvmqnelfkqcmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935593.5579123-1119-9558506346355/AnsiballZ_systemd.py'
Jan 20 18:59:53 compute-0 sudo[236184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:54 compute-0 sudo[236187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 18:59:54 compute-0 sudo[236187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 18:59:54 compute-0 sudo[236187]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:54 compute-0 python3.9[236186]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:59:54 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 18:59:54 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 20 18:59:54 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 20 18:59:54 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 18:59:54 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 18:59:54 compute-0 sudo[236184]: pam_unix(sudo:session): session closed for user root
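The copy, lineinfile, and systemd-restart trio above makes the nvme-fabrics load persistent (via /etc/modules-load.d/nvme-fabrics.conf and /etc/modules) and re-applies it immediately by restarting systemd-modules-load.service, which is exactly the stop/start pair systemd logs. A compact sketch of the same three steps; paths, mode, and content are from the log, and this must run as root:

    import pathlib
    import subprocess

    MODULE = "nvme-fabrics"

    conf = pathlib.Path("/etc/modules-load.d/nvme-fabrics.conf")
    conf.write_text(MODULE + "\n")           # systemd-modules-load reads this dir
    conf.chmod(0o644)

    etc_modules = pathlib.Path("/etc/modules")
    existing = etc_modules.read_text().splitlines() if etc_modules.exists() else []
    if MODULE not in existing:
        etc_modules.write_text("\n".join(existing + [MODULE]) + "\n")

    # Restart so the new config is applied now, not only at next boot.
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"], check=True)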
Jan 20 18:59:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:54.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:54 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80034e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:54 compute-0 ceph-mon[74381]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
Jan 20 18:59:54 compute-0 sudo[236365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cryimdnhvbusgswhhsscaqmjsrdgyyow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935594.6230795-1143-147692587663163/AnsiballZ_dnf.py'
Jan 20 18:59:54 compute-0 sudo[236365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_18:59:54
Jan 20 18:59:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 18:59:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 18:59:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.log', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.nfs', 'default.rgw.control', 'vms', 'cephfs.cephfs.data']
Jan 20 18:59:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 18:59:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:55 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 18:59:55 compute-0 python3.9[236367]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
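ansible.legacy.dnf with state=present boils down to "dnf install" when the package is absent; the systemd "Reloading." and man-db-cache-update lines that follow are side effects of the RPM transaction, not separate tasks. A hedged equivalent with an rpm query for idempotence:

    import subprocess

    def ensure_installed(pkg: str) -> None:
        # rpm -q exits 0 iff the package is already installed.
        if subprocess.run(["rpm", "-q", pkg], capture_output=True).returncode == 0:
            return
        subprocess.run(["dnf", "-y", "install", pkg], check=True)

    ensure_installed("nvme-cli")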
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
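The pg_autoscaler arithmetic above is self-consistent: every reported "pg target" equals usage_ratio x bias x 300, and the result is then quantized with per-pool floors (1 for .mgr, 16 for the bias-4 metadata pool, 32 elsewhere). The factor 300 is an inference, not stated in the log; it matches a 3-OSD cluster at the default mon_target_pg_per_osd=100. A quick check of the printed values:

    # (usage ratio, bias, reported pg target) taken verbatim from the log lines
    CASES = [
        (7.185749983720779e-06, 1.0, 0.0021557249951162337),   # .mgr
        (5.087256625643029e-07, 4.0, 0.0006104707950771635),   # cephfs.cephfs.meta
        (3.8154424692322717e-07, 1.0, 0.00011446327407696816), # .rgw.root
        (2.1620840658982875e-06, 1.0, 0.0006486252197694863),  # default.rgw.log
        (1.2718141564107572e-07, 4.0, 0.00015261769876929088), # default.rgw.meta
        (6.359070782053786e-08, 1.0, 1.907721234616136e-05),   # .nfs
    ]
    for ratio, bias, target in CASES:
        assert abs(ratio * bias * 300 - target) < 1e-15, (ratio, bias, target)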
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:59:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:55.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 18:59:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:55 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 18:59:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 18:59:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:56.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:56 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:56 compute-0 ceph-mon[74381]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:57 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80034e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:57.065Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 18:59:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T18:59:57.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
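Both alertmanager dispatch errors above say the same thing: the ceph-dashboard webhook receivers on compute-1 and compute-2 port 8443 never answered (dial i/o timeout, then context deadline exceeded after retries). A quick reachability probe for the same endpoint; the URL is from the log, the 5-second timeout and empty JSON body are arbitrary choices for the sketch, not the payload alertmanager actually sends:

    import socket
    import urllib.error
    import urllib.request

    def probe(url: str, timeout: float = 5.0) -> str:
        try:
            req = urllib.request.Request(url, data=b"{}", method="POST")
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return f"HTTP {resp.status}"
        except (urllib.error.URLError, socket.timeout) as exc:
            return f"unreachable: {exc}"

    print(probe("http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"))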
Jan 20 18:59:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:57.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:57 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:57 compute-0 systemd[1]: Reloading.
Jan 20 18:59:57 compute-0 systemd-rc-local-generator[236404]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:57 compute-0 systemd-sysv-generator[236408]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:58 compute-0 systemd[1]: Reloading.
Jan 20 18:59:58 compute-0 systemd-sysv-generator[236440]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:58 compute-0 systemd-rc-local-generator[236437]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:18:59:58.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:58 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:58 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 18:59:58 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 18:59:58 compute-0 lvm[236484]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 18:59:58 compute-0 lvm[236484]: VG ceph_vg0 finished
Jan 20 18:59:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:59:58 compute-0 ceph-mon[74381]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 18:59:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:59:58 compute-0 systemd[1]: Reloading.
Jan 20 18:59:59 compute-0 systemd-sysv-generator[236537]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:59 compute-0 systemd-rc-local-generator[236534]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:59 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 18:59:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 18:59:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:18:59:59.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 18:59:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:59:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 18:59:59 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80034e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 18:59:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 18:59:59 compute-0 sudo[236365]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:59] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 18:59:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:18:59:59] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:00:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : overall HEALTH_OK
Jan 20 19:00:00 compute-0 sudo[237835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odtdlcqnnvhvhntbcikvwqdrqnbirwjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935600.0791168-1167-258457069858583/AnsiballZ_systemd_service.py'
Jan 20 19:00:00 compute-0 sudo[237835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 19:00:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 19:00:00 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.799s CPU time.
Jan 20 19:00:00 compute-0 systemd[1]: run-r61af311fb8d7471d9cd7497e20b229b6.service: Deactivated successfully.
Jan 20 19:00:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:00.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:00 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.680775) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600680868, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1236, "num_deletes": 256, "total_data_size": 2281445, "memory_usage": 2320776, "flush_reason": "Manual Compaction"}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600694664, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 2239213, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18066, "largest_seqno": 19301, "table_properties": {"data_size": 2233391, "index_size": 3149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11692, "raw_average_key_size": 18, "raw_value_size": 2221830, "raw_average_value_size": 3554, "num_data_blocks": 142, "num_entries": 625, "num_filter_entries": 625, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935480, "oldest_key_time": 1768935480, "file_creation_time": 1768935600, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 14080 microseconds, and 6307 cpu microseconds.
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.694854) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 2239213 bytes OK
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.694915) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.696540) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.696559) EVENT_LOG_v1 {"time_micros": 1768935600696553, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.696580) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 2276017, prev total WAL file size 2276017, number of live WAL files 2.
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.697479) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(2186KB)], [38(11MB)]
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600697514, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 14546942, "oldest_snapshot_seqno": -1}
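The rocksdb EVENT_LOG_v1 records in the ceph-mon lines above carry a JSON payload after the marker, so flush and compaction stats can be pulled straight out of the journal. A small extractor, with a sample trimmed from the flush_started event above:

    import json
    from typing import Optional

    def parse_event(line: str) -> Optional[dict]:
        marker = "EVENT_LOG_v1 "
        i = line.find(marker)
        return json.loads(line[i + len(marker):]) if i >= 0 else None

    sample = ('rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600680868, "job": 17, '
              '"event": "flush_started", "num_memtables": 1, "num_entries": 1236}')
    evt = parse_event(sample)
    assert evt["event"] == "flush_started" and evt["job"] == 17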
Jan 20 19:00:00 compute-0 python3.9[237837]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:00:00 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 20 19:00:00 compute-0 iscsid[231143]: iscsid shutting down.
Jan 20 19:00:00 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 20 19:00:00 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 20 19:00:00 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 19:00:00 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 20 19:00:00 compute-0 systemd[1]: Started Open-iSCSI.
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 5162 keys, 14045873 bytes, temperature: kUnknown
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600797826, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 14045873, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14009509, "index_size": 22366, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 131445, "raw_average_key_size": 25, "raw_value_size": 13914149, "raw_average_value_size": 2695, "num_data_blocks": 917, "num_entries": 5162, "num_filter_entries": 5162, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935600, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.798088) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 14045873 bytes
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.799319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 144.9 rd, 139.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 11.7 +0.0 blob) out(13.4 +0.0 blob), read-write-amplify(12.8) write-amplify(6.3) OK, records in: 5688, records dropped: 526 output_compression: NoCompression
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.799334) EVENT_LOG_v1 {"time_micros": 1768935600799327, "job": 18, "event": "compaction_finished", "compaction_time_micros": 100411, "compaction_time_cpu_micros": 32976, "output_level": 6, "num_output_files": 1, "total_output_size": 14045873, "num_input_records": 5688, "num_output_records": 5162, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600799986, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935600801788, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.697430) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.801907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.801912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.801914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.801915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:00:00 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:00:00.801916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
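The JOB 18 compaction summary above also checks out numerically: it read one 2239213-byte L0 file plus one L6 file (input_data_size 14546942 in total), wrote 14045873 bytes, and took compaction_time_micros=100411, which reproduces the printed amplification and throughput figures:

    l0_in = 2239213            # table #40, the freshly flushed L0 file
    total_in = 14546942        # "input_data_size" from compaction_started
    out = 14045873             # "total_output_size" from compaction_finished
    t = 100411 / 1e6           # compaction time in seconds

    assert round(out / l0_in, 1) == 6.3                 # write-amplify(6.3)
    assert abs((total_in + out) / l0_in - 12.8) < 0.1   # read-write-amplify(12.8)
    assert round(total_in / t / 1e6, 1) == 144.9        # "MB/sec: 144.9 rd"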
Jan 20 19:00:00 compute-0 sudo[237835]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:00 compute-0 ceph-mon[74381]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:00 compute-0 ceph-mon[74381]: overall HEALTH_OK
Jan 20 19:00:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:01 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002f00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:01.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:01 compute-0 sudo[237993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fazyotaqdhxxrfppactnqduwgfeanmbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935601.111656-1191-257469609919651/AnsiballZ_systemd_service.py'
Jan 20 19:00:01 compute-0 sudo[237993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:01 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:01 compute-0 python3.9[237995]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:00:01 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 20 19:00:01 compute-0 multipathd[235279]: exit (signal)
Jan 20 19:00:01 compute-0 multipathd[235279]: --------shut down-------
Jan 20 19:00:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:01 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 20 19:00:01 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 20 19:00:01 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 19:00:01 compute-0 multipathd[238003]: --------start up--------
Jan 20 19:00:01 compute-0 multipathd[238003]: read /etc/multipath.conf
Jan 20 19:00:01 compute-0 multipathd[238003]: path checkers start up
Jan 20 19:00:01 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 19:00:01 compute-0 sudo[237993]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:02.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:02 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:02 compute-0 python3.9[238160]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:00:02 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 20 19:00:02 compute-0 ceph-mon[74381]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:03 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:03.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:03 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:03 compute-0 sudo[238317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skhpezfujutonzbxoonyjbwptbseputu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935603.4967866-1243-4810385151432/AnsiballZ_file.py'
Jan 20 19:00:03 compute-0 sudo[238317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:03 compute-0 python3.9[238319]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:03 compute-0 sudo[238317]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:04 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 19:00:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:00:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:04.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:00:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:04 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:04 compute-0 sudo[238483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpaqnyshswgaimnqoxbvryfrrfvcvuaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935604.646762-1276-135220128318803/AnsiballZ_systemd_service.py'
Jan 20 19:00:04 compute-0 sudo[238483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:04 compute-0 podman[238444]: 2026-01-20 19:00:04.941633158 +0000 UTC m=+0.059330880 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:00:04 compute-0 ceph-mon[74381]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:05 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:05 compute-0 python3.9[238489]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:00:05 compute-0 systemd[1]: Reloading.
Jan 20 19:00:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:05.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:05 compute-0 systemd-rc-local-generator[238519]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:00:05 compute-0 systemd-sysv-generator[238522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:00:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:05 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:05 compute-0 sudo[238483]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:06 compute-0 python3.9[238678]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:00:06 compute-0 network[238695]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:00:06 compute-0 network[238696]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:00:06 compute-0 network[238697]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:00:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:06.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:06 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:06 compute-0 ceph-mon[74381]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:07 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:07.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:00:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:07.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:07 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:00:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:08.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:08 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:08 compute-0 ceph-mon[74381]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:00:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:09 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:09.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:09 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:09] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:00:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:09] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:00:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:10.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:10 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:11 compute-0 ceph-mon[74381]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:00:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:11 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:11.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:11 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:12 compute-0 sudo[238849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:00:12 compute-0 sudo[238849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:12 compute-0 sudo[238849]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:12 compute-0 sudo[238874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 20 19:00:12 compute-0 sudo[238874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:12 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:12 compute-0 sudo[238874]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:00:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:00:12 compute-0 podman[238913]: 2026-01-20 19:00:12.8317322 +0000 UTC m=+0.085047412 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:00:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:12 compute-0 sudo[238946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:00:12 compute-0 sudo[238946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:12 compute-0 sudo[238946]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:12 compute-0 sudo[238971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:00:12 compute-0 sudo[238971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:00:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:00:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 ceph-mon[74381]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:13 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:13.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:13 compute-0 sudo[238971]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:13 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:13 compute-0 sudo[239153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufuqwbdyqsgcoyuucfkztlypfohlwgyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935613.33839-1333-23584646191046/AnsiballZ_systemd_service.py'
Jan 20 19:00:13 compute-0 sudo[239153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:00:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:00:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:13 compute-0 python3.9[239155]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:13 compute-0 sudo[239157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:00:13 compute-0 sudo[239157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:13 compute-0 sudo[239157]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:13 compute-0 sudo[239153]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:14 compute-0 sudo[239183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:00:14 compute-0 sudo[239183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:00:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:00:14 compute-0 sudo[239240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:00:14 compute-0 sudo[239240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:14 compute-0 sudo[239240]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.368993435 +0000 UTC m=+0.038334527 container create 41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:00:14 compute-0 systemd[1]: Started libpod-conmon-41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a.scope.
Jan 20 19:00:14 compute-0 sudo[239436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dopmgdahqlpqxcleuwmztrzhdkgyyxzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935614.1216447-1333-154011658263758/AnsiballZ_systemd_service.py'
Jan 20 19:00:14 compute-0 sudo[239436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.352062163 +0000 UTC m=+0.021403275 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.450741578 +0000 UTC m=+0.120082690 container init 41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.457199483 +0000 UTC m=+0.126540565 container start 41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.460961206 +0000 UTC m=+0.130302318 container attach 41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:00:14 compute-0 reverent_dijkstra[239441]: 167 167
Jan 20 19:00:14 compute-0 systemd[1]: libpod-41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a.scope: Deactivated successfully.
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.462153029 +0000 UTC m=+0.131494121 container died 41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:00:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-81be9d99b8605c03de35181ace34b3249ab0a330db62f83337ca23911a1ce31b-merged.mount: Deactivated successfully.
Jan 20 19:00:14 compute-0 podman[239396]: 2026-01-20 19:00:14.497551775 +0000 UTC m=+0.166892867 container remove 41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:00:14 compute-0 systemd[1]: libpod-conmon-41acc2793c0d354a7770210984a313e2d5d271ebf3d1c2914618cf58b3a07b8a.scope: Deactivated successfully.
Jan 20 19:00:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:14.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:14 compute-0 podman[239466]: 2026-01-20 19:00:14.679710348 +0000 UTC m=+0.060401440 container create e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:00:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:14 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:14 compute-0 python3.9[239443]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:14 compute-0 systemd[1]: Started libpod-conmon-e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc.scope.
Jan 20 19:00:14 compute-0 podman[239466]: 2026-01-20 19:00:14.651439706 +0000 UTC m=+0.032130758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:00:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:00:14 compute-0 sudo[239436]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0182a28a987b11e33a53d7b51665d4b91a3b676d040c33995c57c48e52de8c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0182a28a987b11e33a53d7b51665d4b91a3b676d040c33995c57c48e52de8c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0182a28a987b11e33a53d7b51665d4b91a3b676d040c33995c57c48e52de8c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0182a28a987b11e33a53d7b51665d4b91a3b676d040c33995c57c48e52de8c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0182a28a987b11e33a53d7b51665d4b91a3b676d040c33995c57c48e52de8c0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:14 compute-0 podman[239466]: 2026-01-20 19:00:14.787323136 +0000 UTC m=+0.168014288 container init e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:00:14 compute-0 podman[239466]: 2026-01-20 19:00:14.796110175 +0000 UTC m=+0.176801247 container start e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lovelace, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:00:14 compute-0 podman[239466]: 2026-01-20 19:00:14.809898202 +0000 UTC m=+0.190589294 container attach e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lovelace, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 19:00:15 compute-0 ceph-mon[74381]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:15 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:15 compute-0 goofy_lovelace[239483]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:00:15 compute-0 goofy_lovelace[239483]: --> All data devices are unavailable
Jan 20 19:00:15 compute-0 systemd[1]: libpod-e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc.scope: Deactivated successfully.
Jan 20 19:00:15 compute-0 podman[239466]: 2026-01-20 19:00:15.140400874 +0000 UTC m=+0.521091916 container died e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0182a28a987b11e33a53d7b51665d4b91a3b676d040c33995c57c48e52de8c0-merged.mount: Deactivated successfully.
Jan 20 19:00:15 compute-0 podman[239466]: 2026-01-20 19:00:15.183386268 +0000 UTC m=+0.564077350 container remove e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:00:15 compute-0 systemd[1]: libpod-conmon-e5a778b7910d5185d19e18637bd2d004242d6d5677574f0df8e513e654319adc.scope: Deactivated successfully.
Jan 20 19:00:15 compute-0 sudo[239183]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:15.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:15 compute-0 sudo[239636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:00:15 compute-0 sudo[239636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:15 compute-0 sudo[239636]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:15 compute-0 sudo[239687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxbqylfswmdzqejhovgpedpvqsmhgmdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935615.029317-1333-214239748469002/AnsiballZ_systemd_service.py'
Jan 20 19:00:15 compute-0 sudo[239687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:15 compute-0 sudo[239688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:00:15 compute-0 sudo[239688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:15 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:15 compute-0 python3.9[239695]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:15 compute-0 sudo[239687]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:15 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 19:00:15 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 20 19:00:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.79815587 +0000 UTC m=+0.038769219 container create 96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:00:15 compute-0 systemd[1]: Started libpod-conmon-96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c.scope.
Jan 20 19:00:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.78204968 +0000 UTC m=+0.022663039 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.881725532 +0000 UTC m=+0.122338961 container init 96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.889675848 +0000 UTC m=+0.130289187 container start 96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_maxwell, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.893105722 +0000 UTC m=+0.133719101 container attach 96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_maxwell, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:00:15 compute-0 thirsty_maxwell[239851]: 167 167
Jan 20 19:00:15 compute-0 systemd[1]: libpod-96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c.scope: Deactivated successfully.
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.895278841 +0000 UTC m=+0.135892180 container died 96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:00:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a68a1610be241b19b6b0a423447687109927a4d0463d21de0dad2f69a19e050a-merged.mount: Deactivated successfully.
Jan 20 19:00:15 compute-0 podman[239784]: 2026-01-20 19:00:15.935217112 +0000 UTC m=+0.175830451 container remove 96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_maxwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:00:15 compute-0 systemd[1]: libpod-conmon-96b53fbe9c5f4fc4db6dd0e7dddf19586231bef72b2feff1683af9d5c98be28c.scope: Deactivated successfully.
Jan 20 19:00:16 compute-0 sudo[239960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyoaixmqngnrtyigbspagxqvayjnnkqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935615.7906582-1333-253428674292403/AnsiballZ_systemd_service.py'
Jan 20 19:00:16 compute-0 sudo[239960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.103204348 +0000 UTC m=+0.047748165 container create 76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tesla, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 19:00:16 compute-0 systemd[1]: Started libpod-conmon-76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6.scope.
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.076431727 +0000 UTC m=+0.020975564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:00:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a441410a2ebef95b857aafa27ba3c86588c36d0e73dc3cc904da529e58377fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a441410a2ebef95b857aafa27ba3c86588c36d0e73dc3cc904da529e58377fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a441410a2ebef95b857aafa27ba3c86588c36d0e73dc3cc904da529e58377fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a441410a2ebef95b857aafa27ba3c86588c36d0e73dc3cc904da529e58377fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.202284033 +0000 UTC m=+0.146827870 container init 76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tesla, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.209615013 +0000 UTC m=+0.154158830 container start 76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tesla, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.212717087 +0000 UTC m=+0.157260894 container attach 76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:00:16 compute-0 python3.9[239965]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:16 compute-0 sudo[239960]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]: {
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:     "0": [
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:         {
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "devices": [
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "/dev/loop3"
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             ],
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "lv_name": "ceph_lv0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "lv_size": "21470642176",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "name": "ceph_lv0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "tags": {
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.cluster_name": "ceph",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.crush_device_class": "",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.encrypted": "0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.osd_id": "0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.type": "block",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.vdo": "0",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:                 "ceph.with_tpm": "0"
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             },
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "type": "block",
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:             "vg_name": "ceph_vg0"
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:         }
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]:     ]
Jan 20 19:00:16 compute-0 ecstatic_tesla[239968]: }
Jan 20 19:00:16 compute-0 systemd[1]: libpod-76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6.scope: Deactivated successfully.
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.533082863 +0000 UTC m=+0.477626680 container died 76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tesla, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 19:00:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:16.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a441410a2ebef95b857aafa27ba3c86588c36d0e73dc3cc904da529e58377fb-merged.mount: Deactivated successfully.
Jan 20 19:00:16 compute-0 podman[239930]: 2026-01-20 19:00:16.571705717 +0000 UTC m=+0.516249534 container remove 76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_tesla, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 19:00:16 compute-0 systemd[1]: libpod-conmon-76c141a8d87c7d05517e549b33b474ad158741c995735dbf9822c9745bc186e6.scope: Deactivated successfully.
Jan 20 19:00:16 compute-0 sudo[239688]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:16 compute-0 sudo[240066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:00:16 compute-0 sudo[240066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:16 compute-0 sudo[240066]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:16 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:16 compute-0 sudo[240114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:00:16 compute-0 sudo[240114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:16 compute-0 sudo[240189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wruanefljtuupwxspykewdqxkhzutngu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935616.551915-1333-139553547118851/AnsiballZ_systemd_service.py'
Jan 20 19:00:16 compute-0 sudo[240189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:17.093Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:00:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:17 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:17 compute-0 ceph-mon[74381]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:17 compute-0 python3.9[240191]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:17 compute-0 sudo[240189]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.20115382 +0000 UTC m=+0.041185104 container create b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 20 19:00:17 compute-0 systemd[1]: Started libpod-conmon-b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7.scope.
Jan 20 19:00:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:17.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.181939186 +0000 UTC m=+0.021970500 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.291712003 +0000 UTC m=+0.131743307 container init b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.306952519 +0000 UTC m=+0.146983803 container start b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.310251199 +0000 UTC m=+0.150282513 container attach b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:00:17 compute-0 cool_heisenberg[240258]: 167 167
Jan 20 19:00:17 compute-0 systemd[1]: libpod-b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7.scope: Deactivated successfully.
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.314356701 +0000 UTC m=+0.154387985 container died b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_heisenberg, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:00:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a2f2004837ab047d9e2d78e9e215c8341c7bb7d3cb08e0b5900b4c94672cf1-merged.mount: Deactivated successfully.
Jan 20 19:00:17 compute-0 podman[240233]: 2026-01-20 19:00:17.361447807 +0000 UTC m=+0.201479101 container remove b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cool_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 19:00:17 compute-0 systemd[1]: libpod-conmon-b941cb9c4ae60341cdc54781b4b88cbf8aac646f6e77a1a4860bf334805fb8f7.scope: Deactivated successfully.
Jan 20 19:00:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:17 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:17 compute-0 podman[240353]: 2026-01-20 19:00:17.583766625 +0000 UTC m=+0.076827998 container create 5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:00:17 compute-0 systemd[1]: Started libpod-conmon-5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702.scope.
Jan 20 19:00:17 compute-0 podman[240353]: 2026-01-20 19:00:17.537740969 +0000 UTC m=+0.030802362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:00:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f864db79e4a6ee4e205a2b36f786d455b9ae6b6451d393d54b42f202a3600fa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f864db79e4a6ee4e205a2b36f786d455b9ae6b6451d393d54b42f202a3600fa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f864db79e4a6ee4e205a2b36f786d455b9ae6b6451d393d54b42f202a3600fa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f864db79e4a6ee4e205a2b36f786d455b9ae6b6451d393d54b42f202a3600fa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:00:17 compute-0 podman[240353]: 2026-01-20 19:00:17.677880795 +0000 UTC m=+0.170942198 container init 5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hofstadter, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:00:17 compute-0 podman[240353]: 2026-01-20 19:00:17.687475376 +0000 UTC m=+0.180536759 container start 5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:00:17 compute-0 podman[240353]: 2026-01-20 19:00:17.691563638 +0000 UTC m=+0.184625011 container attach 5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:00:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:00:17 compute-0 sudo[240446]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhbruxokfqqdkbufpdigchxualebmgqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935617.3691876-1333-53027295980622/AnsiballZ_systemd_service.py'
Jan 20 19:00:17 compute-0 sudo[240446]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:18 compute-0 python3.9[240448]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:18 compute-0 sudo[240446]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:18 compute-0 lvm[240618]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:00:18 compute-0 lvm[240618]: VG ceph_vg0 finished
Jan 20 19:00:18 compute-0 hardcore_hofstadter[240412]: {}
Jan 20 19:00:18 compute-0 systemd[1]: libpod-5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702.scope: Deactivated successfully.
Jan 20 19:00:18 compute-0 systemd[1]: libpod-5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702.scope: Consumed 1.399s CPU time.
Jan 20 19:00:18 compute-0 podman[240353]: 2026-01-20 19:00:18.535970689 +0000 UTC m=+1.029032082 container died 5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hofstadter, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 19:00:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:18.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-f864db79e4a6ee4e205a2b36f786d455b9ae6b6451d393d54b42f202a3600fa0-merged.mount: Deactivated successfully.
Jan 20 19:00:18 compute-0 podman[240353]: 2026-01-20 19:00:18.59350284 +0000 UTC m=+1.086564233 container remove 5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_hofstadter, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:00:18 compute-0 systemd[1]: libpod-conmon-5a9f8c54be47486fefce389c0cdd0a45c7d23cce3b674738ed80f8336594d702.scope: Deactivated successfully.
Jan 20 19:00:18 compute-0 sudo[240114]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:18 compute-0 sudo[240684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pywgghnmzkytpirmdnawsnieaxhlihyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935618.2683322-1333-181010931122326/AnsiballZ_systemd_service.py'
Jan 20 19:00:18 compute-0 sudo[240684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:00:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:18 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:00:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:18 compute-0 sudo[240687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:00:18 compute-0 sudo[240687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:18 compute-0 sudo[240687]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:18 compute-0 python3.9[240686]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:19 compute-0 sudo[240684]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:19 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:19 compute-0 ceph-mon[74381]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:00:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:00:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:19.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:19 compute-0 sudo[240862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuiyiurnvdkclroxnpmxfrjfdcyjoxsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935619.162443-1333-121692184746373/AnsiballZ_systemd_service.py'
Jan 20 19:00:19 compute-0 sudo[240862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:19 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:19 compute-0 python3.9[240864]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:00:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:19 compute-0 sudo[240862]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:00:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:00:20 compute-0 sshd-session[240893]: banner exchange: Connection from 42.193.43.83 port 52496: invalid format
Jan 20 19:00:20 compute-0 sshd-session[240866]: Connection closed by 42.193.43.83 port 55630
Jan 20 19:00:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:20.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:20 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:21 compute-0 ceph-mon[74381]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:00:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:21.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:00:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190021 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:00:22 compute-0 sudo[241021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhejszevxiacijoicxmqzxhbapcegnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935621.8770032-1510-119078531549460/AnsiballZ_file.py'
Jan 20 19:00:22 compute-0 sudo[241021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:22 compute-0 python3.9[241023]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:22 compute-0 sudo[241021]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:22.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:22 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:22 compute-0 sudo[241173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiodjcqhoppzwkimzcdvvvtymakulvmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935622.6160378-1510-48758494783305/AnsiballZ_file.py'
Jan 20 19:00:22 compute-0 sudo[241173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:23 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:23 compute-0 python3.9[241175]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:23 compute-0 sudo[241173]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:23 compute-0 ceph-mon[74381]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:23.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:23 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:23 compute-0 sudo[241326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yihflfwbclbqchapqknqidualrxpnbge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935623.2905698-1510-120632914486998/AnsiballZ_file.py'
Jan 20 19:00:23 compute-0 sudo[241326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:23 compute-0 python3.9[241329]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:23 compute-0 sudo[241326]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:24 compute-0 sudo[241479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qogzoulkyrkajxvzgpruetxrmbowljes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935624.0208383-1510-261378352201871/AnsiballZ_file.py'
Jan 20 19:00:24 compute-0 sudo[241479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:24 compute-0 python3.9[241481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:24 compute-0 sudo[241479]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:24.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:24 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:25 compute-0 sudo[241631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szedniffrfpdkxfznopokdpfdsqpsfeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935624.7048917-1510-52351194210466/AnsiballZ_file.py'
Jan 20 19:00:25 compute-0 sudo[241631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:00:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:25 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:25 compute-0 ceph-mon[74381]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:00:25 compute-0 python3.9[241633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:25 compute-0 sudo[241631]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:00:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:25.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:00:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:25 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:25 compute-0 sudo[241785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjrldcsfhtklhuckpcdefmbvytcfxvkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935625.4204574-1510-171497301674800/AnsiballZ_file.py'
Jan 20 19:00:25 compute-0 sudo[241785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:26 compute-0 python3.9[241787]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:26 compute-0 sudo[241785]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:26 compute-0 sudo[241937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sywebrerhurmcvuiwaqhngacwnitqaef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935626.1753833-1510-271791309043705/AnsiballZ_file.py'
Jan 20 19:00:26 compute-0 sudo[241937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:26.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:26 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:26 compute-0 python3.9[241939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:26 compute-0 sudo[241937]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:27.094Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:00:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:27 compute-0 ceph-mon[74381]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:27 compute-0 sudo[242089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxhicgawsphxxunaegmijsffmspsmhzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935626.8749163-1510-187234559471759/AnsiballZ_file.py'
Jan 20 19:00:27 compute-0 sudo[242089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:27.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:27 compute-0 python3.9[242091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:27 compute-0 sudo[242089]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:28 compute-0 sudo[242243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fyaubpdaofpkbvvkobmxipwghrwhwggu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935627.7124248-1681-27162627760127/AnsiballZ_file.py'
Jan 20 19:00:28 compute-0 sudo[242243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:28 compute-0 python3.9[242245]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:28 compute-0 sudo[242243]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:28.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:28 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:28 compute-0 sudo[242395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmilkusnhniqnjpvqzoltwnrevtxfgoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935628.3990114-1681-212318861733136/AnsiballZ_file.py'
Jan 20 19:00:28 compute-0 sudo[242395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:28 compute-0 python3.9[242397]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:28 compute-0 sudo[242395]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:29 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:29 compute-0 ceph-mon[74381]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:00:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:29.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:29 compute-0 sudo[242547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iykesxtlnctzawpkmcmsisxywxhguhiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935629.0355349-1681-11303643008729/AnsiballZ_file.py'
Jan 20 19:00:29 compute-0 sudo[242547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:29 compute-0 python3.9[242549]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:29 compute-0 sudo[242547]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:00:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:29] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 19:00:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:29] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 19:00:29 compute-0 sudo[242700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-angtalvpobbwfqevulrmundfbdmygdys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935629.6326444-1681-63694825866355/AnsiballZ_file.py'
Jan 20 19:00:29 compute-0 sudo[242700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:30 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:00:30.274 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:00:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:00:30.275 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:00:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:00:30.275 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:00:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:30 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:00:30 compute-0 python3.9[242702]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:30 compute-0 ceph-mon[74381]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:00:30 compute-0 sudo[242700]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:30.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:30 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:30 compute-0 sudo[242853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvzsglyxafidddjqyvtbeuhkmembchg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935630.5732696-1681-121601660957396/AnsiballZ_file.py'
Jan 20 19:00:30 compute-0 sudo[242853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:31 compute-0 python3.9[242855]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:31 compute-0 sudo[242853]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002d10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:31.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:31 compute-0 sudo[243005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazfasaycuiwcalrosnnivwuujcngsfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935631.2420864-1681-64648204746277/AnsiballZ_file.py'
Jan 20 19:00:31 compute-0 sudo[243005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:31 compute-0 python3.9[243007]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Jan 20 19:00:31 compute-0 sudo[243005]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:32 compute-0 sudo[243158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxtpmbbthojhzkblyhkyscejvfjhaued ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935631.9210966-1681-262868011356006/AnsiballZ_file.py'
Jan 20 19:00:32 compute-0 sudo[243158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:32 compute-0 python3.9[243160]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:32 compute-0 sudo[243158]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:32.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:32 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8004000 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:32 compute-0 sudo[243314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvdyyzbnsfaxnxytfpzzfqfsdhpjviev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935632.5791821-1681-22475460571156/AnsiballZ_file.py'
Jan 20 19:00:32 compute-0 sudo[243314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:00:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9010 writes, 35K keys, 9010 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9010 writes, 1929 syncs, 4.67 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 622 writes, 961 keys, 622 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
                                           Interval WAL: 622 writes, 300 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:00:33 compute-0 ceph-mon[74381]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 170 B/s wr, 0 op/s
Jan 20 19:00:33 compute-0 python3.9[243316]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:33 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:33 compute-0 sudo[243314]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:33.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:33 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:00:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:33 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:00:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:33 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 20 19:00:34 compute-0 sudo[243417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:00:34 compute-0 sudo[243417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:34 compute-0 sudo[243417]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:34 compute-0 sudo[243493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seejvexldpzodfvifliuvkdrmnhxmcph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935634.0079453-1855-48560994496444/AnsiballZ_command.py'
Jan 20 19:00:34 compute-0 sudo[243493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:34 compute-0 python3.9[243495]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:34 compute-0 sudo[243493]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:34.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:34 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:35 compute-0 ceph-mon[74381]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 20 19:00:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:35 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:35 compute-0 podman[243574]: 2026-01-20 19:00:35.123762693 +0000 UTC m=+0.092582034 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
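podman records each container healthcheck as an event whose config_data field is a Python-literal dict (single-quoted, so not JSON). A sketch that recovers structured data from such a payload with ast.literal_eval; the abbreviated payload below is trimmed from the event above for illustration:

    import ast

    # config_data as in the ovn_metadata_agent event, cut down to two keys.
    config_data = ("{'healthcheck': {'mount': "
                   "'/var/lib/openstack/healthchecks/ovn_metadata_agent', "
                   "'test': '/openstack/healthcheck'}, "
                   "'volumes': ['/run/openvswitch:/run/openvswitch:z', "
                   "'/run/netns:/run/netns:shared']}")

    cfg = ast.literal_eval(config_data)      # safe for literals, unlike eval()
    print(cfg['healthcheck']['test'])        # /openstack/healthcheck
    for vol in cfg['volumes']:               # host:container[:options]
        host, container, *opts = vol.split(':')
        print(host, '->', container, opts)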
Jan 20 19:00:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:35.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:35 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:35 compute-0 python3.9[243666]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:00:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:00:36 compute-0 sudo[243818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrsleguobstugbisqfbsujwzckzjyudh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935636.0777621-1909-159293016091169/AnsiballZ_systemd_service.py'
Jan 20 19:00:36 compute-0 sudo[243818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:36 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:00:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:36 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:36 compute-0 python3.9[243820]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:00:36 compute-0 systemd[1]: Reloading.
Jan 20 19:00:36 compute-0 systemd-sysv-generator[243851]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:00:36 compute-0 systemd-rc-local-generator[243847]: /etc/rc.d/rc.local is not marked executable, skipping.
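The generator warning above repeats on every daemon-reload until the initscripts package ships a native unit for the SysV network script. A unit of the kind the warning asks for; an illustrative sketch only, not the file systemd-sysv-generator actually emits:

    [Unit]
    Description=Bring up/down networking (native replacement sketch)
    After=network-pre.target
    Before=network.target
    Wants=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target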
Jan 20 19:00:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:37.095Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:00:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:37 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:37 compute-0 sudo[243818]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:37 compute-0 ceph-mon[74381]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:00:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:37.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:37 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8001ff0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:37 compute-0 sudo[244006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biaubnhwcdeiwdqfnsmkdhswgosyrkdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935637.3225725-1933-250692697783673/AnsiballZ_command.py'
Jan 20 19:00:37 compute-0 sudo[244006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:00:37 compute-0 python3.9[244008]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:37 compute-0 sudo[244006]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:38 compute-0 sudo[244160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkvpzpjsyygbfwzgvlcicygmjxsdujmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935638.037516-1933-3765929581062/AnsiballZ_command.py'
Jan 20 19:00:38 compute-0 sudo[244160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:38 compute-0 python3.9[244162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:38 compute-0 sudo[244160]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:38.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:39 compute-0 sudo[244313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnuhisgbnfzxydhfqpbuzoavnaujxwbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935638.7261565-1933-156827932164228/AnsiballZ_command.py'
Jan 20 19:00:39 compute-0 sudo[244313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:39 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:39 compute-0 ceph-mon[74381]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:00:39 compute-0 python3.9[244315]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:39.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:39 compute-0 sudo[244313]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:39 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:39 compute-0 sudo[244467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogaysmlqbeidrjnoqhswuuqtmbnudlpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935639.4485123-1933-245953023580422/AnsiballZ_command.py'
Jan 20 19:00:39 compute-0 sudo[244467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:00:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:39] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 19:00:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:39] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Jan 20 19:00:39 compute-0 python3.9[244469]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:40 compute-0 sudo[244467]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:00:40 compute-0 sudo[244621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzttjlawtdjycfoenoludrzfggkdval ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935640.1491504-1933-278957567126243/AnsiballZ_command.py'
Jan 20 19:00:40 compute-0 sudo[244621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:40.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:40 compute-0 python3.9[244623]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:40 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:41 compute-0 ceph-mon[74381]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:00:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:41.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:41 compute-0 sudo[244621]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:00:42 compute-0 sudo[244775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmwwemfnyugrmhlsrkxjhhewyejiycek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935641.8665574-1933-251847385582392/AnsiballZ_command.py'
Jan 20 19:00:42 compute-0 sudo[244775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:42 compute-0 python3.9[244777]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:42 compute-0 sudo[244775]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:00:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:42.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:00:42 compute-0 ceph-mon[74381]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:00:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:42 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:42 compute-0 sudo[244942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmfpvzfaubeqwqqbvdfpwbistpxcovhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935642.6782844-1933-241043538043707/AnsiballZ_command.py'
Jan 20 19:00:42 compute-0 sudo[244942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:43 compute-0 podman[244903]: 2026-01-20 19:00:43.072410565 +0000 UTC m=+0.160448196 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 20 19:00:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:43 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003cc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:43 compute-0 python3.9[244948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:43 compute-0 sudo[244942]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:43.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:43 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:43 compute-0 sudo[245109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcyvcduhfxdxwnnbvyiykfavkjqpwivj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935643.3612702-1933-272880947322878/AnsiballZ_command.py'
Jan 20 19:00:43 compute-0 sudo[245109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Jan 20 19:00:43 compute-0 python3.9[245111]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:43 compute-0 sudo[245109]: pam_unix(sudo:session): session closed for user root
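The run from 19:00:37 to 19:00:43 issues one systemctl reset-failed per tripleo nova unit, each through its own sudo/AnsiballZ round trip. A compact sketch of the same cleanup in a single process; the unit list is copied from the log, the loop is illustrative:

    import subprocess

    # Units reset above, one Ansible task each.
    UNITS = [
        'tripleo_nova_compute.service',
        'tripleo_nova_migration_target.service',
        'tripleo_nova_api_cron.service',
        'tripleo_nova_api.service',
        'tripleo_nova_conductor.service',
        'tripleo_nova_metadata.service',
        'tripleo_nova_scheduler.service',
        'tripleo_nova_vnc_proxy.service',
    ]

    for unit in UNITS:
        # reset-failed clears failed state so later starts are not throttled;
        # it exits non-zero for unknown units, which we tolerate here.
        subprocess.run(['/usr/bin/systemctl', 'reset-failed', unit])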
Jan 20 19:00:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190043 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:00:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:44 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:45 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:00:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:45.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:00:45 compute-0 sudo[245263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbibokhlvkrgvxjzaczmtillwtujmutj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935645.0842903-2140-205382585783582/AnsiballZ_file.py'
Jan 20 19:00:45 compute-0 sudo[245263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:45 compute-0 ceph-mon[74381]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 852 B/s wr, 3 op/s
Jan 20 19:00:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:45 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:45 compute-0 python3.9[245265]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:45 compute-0 sudo[245263]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:00:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:46 compute-0 sudo[245416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzbgxcjniqsdcdczztovudtjqqslpfbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935645.7918122-2140-249208774211993/AnsiballZ_file.py'
Jan 20 19:00:46 compute-0 sudo[245416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:46 compute-0 python3.9[245418]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:46 compute-0 sudo[245416]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:46 compute-0 ceph-mon[74381]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:00:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:46.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:46 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003ce0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:46 compute-0 sudo[245569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcxbezietetxmmprszwiglejgxcxatjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935646.4007869-2140-176062271787433/AnsiballZ_file.py'
Jan 20 19:00:46 compute-0 sudo[245569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:47 compute-0 python3.9[245571]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:47 compute-0 sudo[245569]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:47.096Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:00:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:47.097Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:00:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:47.097Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
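Alertmanager keeps retrying the ceph-dashboard webhook receivers on compute-1 and compute-2 and gives up on i/o timeouts and context deadlines, i.e. port 8443 on those hosts is not reachable from compute-0. A quick reachability probe for the same endpoint; the URL mirrors the log, the probe itself is an assumption:

    import socket
    import urllib.error
    import urllib.request

    URL = 'http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver'

    try:
        # Alertmanager POSTs JSON here; an empty POST distinguishes a
        # listening receiver (any HTTP status) from a dead host (timeout).
        req = urllib.request.Request(URL, data=b'{}', method='POST')
        with urllib.request.urlopen(req, timeout=5) as resp:
            print('receiver answered:', resp.status)
    except urllib.error.HTTPError as exc:
        print('receiver answered:', exc.code)   # listening, rejected payload
    except (urllib.error.URLError, socket.timeout) as exc:
        print('unreachable:', exc)              # matches the timeouts above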
Jan 20 19:00:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:47 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003ce0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:47.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:47 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:00:47 compute-0 sudo[245722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnxafeksfoylzhtzaazynjpfnvvsqtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935647.614428-2206-108074248506575/AnsiballZ_file.py'
Jan 20 19:00:47 compute-0 sudo[245722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:48 compute-0 python3.9[245724]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:48 compute-0 sudo[245722]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:00:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:48.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:00:48 compute-0 sudo[245875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcrgwqhsmtphpluuymzrnqakfazxcixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935648.30663-2206-141510654143061/AnsiballZ_file.py'
Jan 20 19:00:48 compute-0 sudo[245875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:48 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:48 compute-0 python3.9[245877]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:48 compute-0 sudo[245875]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:48 compute-0 ceph-mon[74381]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:00:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:49 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac001fc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:49.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:49 compute-0 sudo[246027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waykgpguuabcjnbrpyatwjgcgfadgbcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935649.0174096-2206-119045828606329/AnsiballZ_file.py'
Jan 20 19:00:49 compute-0 sudo[246027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:49 compute-0 python3.9[246029]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:49 compute-0 sudo[246027]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:49 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:00:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:49] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 19:00:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:49] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 19:00:49 compute-0 sudo[246180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gupopwkxgyvjbkmbgnbbbdxwtypwdmhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935649.6505206-2206-190930324605540/AnsiballZ_file.py'
Jan 20 19:00:49 compute-0 sudo[246180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:50 compute-0 python3.9[246182]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:50 compute-0 sudo[246180]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:50 compute-0 sudo[246333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdbljnxpjyinjlpdgtanfdjyielxkglc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935650.2355661-2206-107959057235398/AnsiballZ_file.py'
Jan 20 19:00:50 compute-0 sudo[246333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:50.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:50 compute-0 python3.9[246335]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:50 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:50 compute-0 sudo[246333]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:51 compute-0 ceph-mon[74381]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:00:51 compute-0 sudo[246485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tumqkdqgqkboshmrlyysldtgqydlembg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935650.875419-2206-180242268055198/AnsiballZ_file.py'
Jan 20 19:00:51 compute-0 sudo[246485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:51 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:51.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:51 compute-0 python3.9[246487]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:51 compute-0 sudo[246485]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:51 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:51 compute-0 sudo[246638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exfzgzsxcatnxekyrojsursuvizetdpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935651.4722443-2206-5619624430104/AnsiballZ_file.py'
Jan 20 19:00:51 compute-0 sudo[246638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:00:51 compute-0 python3.9[246640]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:00:51 compute-0 sudo[246638]: pam_unix(sudo:session): session closed for user root
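Each ansible.builtin.file task in this stretch creates one directory with owner, group, mode, and the SELinux type container_file_t so podman containers can share it. A condensed sketch of the same provisioning; paths and attributes are copied from the log, and the chcon call stands in for Ansible's SELinux bindings:

    import os
    import shutil
    import subprocess

    # (path, owner, group, mode) as invoked above; None keeps the default mode.
    DIRS = [
        ('/var/lib/openstack/config/nova', 'zuul', 'zuul', 0o755),
        ('/var/lib/openstack/config/containers', 'zuul', 'zuul', 0o755),
        ('/var/lib/nova/instances', 'zuul', 'zuul', 0o755),
        ('/etc/ceph', 'root', 'root', 0o750),
        ('/etc/multipath', 'zuul', 'zuul', None),
    ]

    for path, owner, group, mode in DIRS:
        os.makedirs(path, exist_ok=True)
        shutil.chown(path, user=owner, group=group)
        if mode is not None:
            os.chmod(path, mode)
        # container_file_t lets container processes read/write the directory.
        subprocess.run(['chcon', '-t', 'container_file_t', path], check=True)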
Jan 20 19:00:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:52 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003d20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:53 compute-0 ceph-mon[74381]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:00:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:53 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:53.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:53 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:00:54 compute-0 sudo[246668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:00:54 compute-0 sudo[246668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:00:54 compute-0 sudo[246668]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:54.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:54 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:00:54
Jan 20 19:00:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:00:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:00:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.control', '.nfs', 'cephfs.cephfs.meta', 'images', '.mgr', 'backups']
Jan 20 19:00:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:00:55 compute-0 ceph-mon[74381]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:00:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
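In every pg_autoscaler line above, the pg target is the pool's space usage times its bias times a constant 300, which is consistent with the default mon_target_pg_per_osd of 100 on a 3-OSD cluster (neither value appears in the log); the result is then quantized to a power of two, with the pool's current pg_num as the floor. Reproducing two of the lines:

    # pg target = usage ratio * bias * (PGs per OSD * OSD count)
    PG_PER_OSD = 100      # assumed default mon_target_pg_per_osd
    NUM_OSDS = 3          # assumed from the 60 GiB, three-node cluster

    def pg_target(usage_ratio, bias):
        return usage_ratio * bias * PG_PER_OSD * NUM_OSDS

    # Pool '.mgr': using 7.185749983720779e-06, bias 1.0
    print(pg_target(7.185749983720779e-06, 1.0))   # ~0.0021557249951162337
    # Pool 'cephfs.cephfs.meta': using 5.087256625643029e-07, bias 4.0
    print(pg_target(5.087256625643029e-07, 4.0))   # ~0.0006104707950771635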
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:00:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:55 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003d40 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:00:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:55.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:55 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:00:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:56.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:56 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c50 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:57.098Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:00:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:00:57.098Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:00:57 compute-0 ceph-mon[74381]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:57 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:57.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:57 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003d60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:00:58 compute-0 sudo[246821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuqgogbltjerqejbpstjkbeyitzwxuop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935657.7055628-2531-267103754442884/AnsiballZ_getent.py'
Jan 20 19:00:58 compute-0 sudo[246821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:58 compute-0 python3.9[246823]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 20 19:00:58 compute-0 sudo[246821]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:00:58.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:58 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:59 compute-0 sudo[246975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdqtwopxuhuxjqhxdttreunuoxxkzlpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935658.667428-2555-197949447192267/AnsiballZ_group.py'
Jan 20 19:00:59 compute-0 sudo[246975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:59 compute-0 ceph-mon[74381]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:00:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:59 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c70 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:00:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:00:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:00:59.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:00:59 compute-0 python3.9[246977]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 19:00:59 compute-0 groupadd[246978]: group added to /etc/group: name=nova, GID=42436
Jan 20 19:00:59 compute-0 groupadd[246978]: group added to /etc/gshadow: name=nova
Jan 20 19:00:59 compute-0 groupadd[246978]: new group: name=nova, GID=42436
Jan 20 19:00:59 compute-0 sudo[246975]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:00:59 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:00:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:00:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 19:00:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:00:59] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 19:01:00 compute-0 sudo[247135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loxmaqjzieifqefczqdtmlgvxzcwsoyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935659.9127123-2579-18932614145090/AnsiballZ_user.py'
Jan 20 19:01:00 compute-0 sudo[247135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:00.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:00 compute-0 python3.9[247137]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 19:01:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:00 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003d80 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:00 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:01:00 compute-0 useradd[247139]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 20 19:01:00 compute-0 useradd[247139]: add 'nova' to group 'libvirt'
Jan 20 19:01:00 compute-0 useradd[247139]: add 'nova' to shadow group 'libvirt'
Jan 20 19:01:00 compute-0 sudo[247135]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:01 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:01.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:01 compute-0 ceph-mon[74381]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:01 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003c90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:01 compute-0 CROND[247175]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 19:01:01 compute-0 run-parts[247178]: (/etc/cron.hourly) starting 0anacron
Jan 20 19:01:01 compute-0 run-parts[247184]: (/etc/cron.hourly) finished 0anacron
Jan 20 19:01:01 compute-0 CROND[247174]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 20 19:01:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:01 compute-0 sshd-session[247172]: Accepted publickey for zuul from 192.168.122.30 port 36384 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 19:01:01 compute-0 systemd-logind[796]: New session 56 of user zuul.
Jan 20 19:01:01 compute-0 systemd[1]: Started Session 56 of User zuul.
Jan 20 19:01:01 compute-0 sshd-session[247172]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:01:01 compute-0 sshd-session[247186]: Received disconnect from 192.168.122.30 port 36384:11: disconnected by user
Jan 20 19:01:01 compute-0 sshd-session[247186]: Disconnected from user zuul 192.168.122.30 port 36384
Jan 20 19:01:01 compute-0 sshd-session[247172]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:01:01 compute-0 systemd[1]: session-56.scope: Deactivated successfully.
Jan 20 19:01:01 compute-0 systemd-logind[796]: Session 56 logged out. Waiting for processes to exit.
Jan 20 19:01:01 compute-0 systemd-logind[796]: Removed session 56.
Jan 20 19:01:02 compute-0 ceph-mon[74381]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:02.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:02 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:02 compute-0 python3.9[247337]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:03 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003da0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:03 compute-0 python3.9[247458]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935662.2576396-2654-225528451607316/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:03.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:03 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:03 compute-0 python3.9[247609]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:04 compute-0 python3.9[247685]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:04.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:04 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003cb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:04 compute-0 python3.9[247837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:05 compute-0 ceph-mon[74381]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:05 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:05 compute-0 podman[247932]: 2026-01-20 19:01:05.303251111 +0000 UTC m=+0.087487795 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 19:01:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:05.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:05 compute-0 python3.9[247968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935664.507965-2654-42653324902805/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:05 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:06 compute-0 python3.9[248125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:06.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:06 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:06 compute-0 python3.9[248247]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935665.6028101-2654-156525393608977/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:07 compute-0 ceph-mon[74381]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:07.099Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:01:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:07 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4003cb0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:07.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:07 compute-0 python3.9[248397]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:07 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:07 compute-0 python3.9[248519]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935666.9520493-2654-58129165320891/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:08 compute-0 python3.9[248670]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:08.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:08 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b8001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:09 compute-0 python3.9[248792]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935668.0812461-2654-197764204007586/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:09 compute-0 ceph-mon[74381]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:09 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:09 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00008d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 19:01:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:09] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Jan 20 19:01:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:01:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:10.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:10 compute-0 sudo[248944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nffwansznossmxtrrrucqugbuuafgjgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935670.3960245-2903-8298460381834/AnsiballZ_file.py'
Jan 20 19:01:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:10 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00008d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:10 compute-0 sudo[248944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:10 compute-0 python3.9[248946]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:10 compute-0 sudo[248944]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:11 compute-0 ceph-mon[74381]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:11 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:11.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:11 compute-0 sudo[249096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txwamtrvqcxwcqopgbweqynrxlgvnjou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935671.2166474-2927-51069834294584/AnsiballZ_copy.py'
Jan 20 19:01:11 compute-0 sudo[249096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:11 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00008d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:11 compute-0 python3.9[249098]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:11 compute-0 sudo[249096]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:12 compute-0 sudo[249250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcpglkiuordtgozsphiyxkjjpnhhimfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935672.0833113-2951-224801575903893/AnsiballZ_stat.py'
Jan 20 19:01:12 compute-0 sudo[249250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:12.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:12 compute-0 python3.9[249252]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:01:12 compute-0 sudo[249250]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:12 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:13 compute-0 ceph-mon[74381]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:13 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:13.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:13 compute-0 sudo[249412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iylscxagdgdcgsrdwhuiafnazuekqgah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935673.1524282-2975-85280716836492/AnsiballZ_stat.py'
Jan 20 19:01:13 compute-0 sudo[249412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:13 compute-0 podman[249376]: 2026-01-20 19:01:13.471516632 +0000 UTC m=+0.096790808 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:01:13 compute-0 python3.9[249415]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:13 compute-0 sudo[249412]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:13 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:13 compute-0 sudo[249552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdayrzvgnzljeuforrgrxezoztilhkbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935673.1524282-2975-85280716836492/AnsiballZ_copy.py'
Jan 20 19:01:13 compute-0 sudo[249552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:14 compute-0 python3.9[249554]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768935673.1524282-2975-85280716836492/.source _original_basename=.vuvqrawj follow=False checksum=178076b22a8cf3d4cded69b18ae88b5e74e6c85a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 20 19:01:14 compute-0 sudo[249552]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:14 compute-0 sudo[249582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:01:14 compute-0 sudo[249582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:14 compute-0 sudo[249582]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:14.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:14 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00008d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:15 compute-0 python3.9[249732]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:01:15 compute-0 ceph-mon[74381]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:15 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:15.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:15 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:15 compute-0 python3.9[249885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:16 compute-0 python3.9[250007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935675.4527845-3053-133203555560984/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:16.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:16 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:17.101Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:01:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:17 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0002b20 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:17.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:17 compute-0 ceph-mon[74381]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:17 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:17 compute-0 python3.9[250157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:01:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:18 compute-0 python3.9[250279]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935676.919709-3098-277749906431155/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:01:18 compute-0 ceph-mon[74381]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:18.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:18 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:19 compute-0 sudo[250386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:01:19 compute-0 sudo[250386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:19 compute-0 sudo[250386]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:19 compute-0 sudo[250473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crhtjfhwvjkrfugbfaytmzrplqqhduyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935678.669048-3149-44363128270499/AnsiballZ_container_config_data.py'
Jan 20 19:01:19 compute-0 sudo[250473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:19 compute-0 sudo[250440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:01:19 compute-0 sudo[250440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:19 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:19.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:19 compute-0 python3.9[250480]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 20 19:01:19 compute-0 sudo[250473]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:01:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:01:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:19 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:01:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:19] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:01:19 compute-0 sudo[250440]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:20 compute-0 sudo[250666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bindvltqdtrfgacubstzgcglcrdbrjxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935679.8862774-3182-183109238077115/AnsiballZ_container_config_hash.py'
Jan 20 19:01:20 compute-0 sudo[250666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:01:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:01:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:20 compute-0 sudo[250669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:01:20 compute-0 sudo[250669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:20 compute-0 sudo[250669]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:20 compute-0 python3.9[250668]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:20 compute-0 ceph-mon[74381]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:01:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:01:20 compute-0 sudo[250666]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:20 compute-0 sudo[250694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:01:20 compute-0 sudo[250694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:20.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:20 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:20 compute-0 podman[250784]: 2026-01-20 19:01:20.981532418 +0000 UTC m=+0.036510412 container create 08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:01:21 compute-0 systemd[1]: Started libpod-conmon-08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb.scope.
Jan 20 19:01:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:21 compute-0 podman[250784]: 2026-01-20 19:01:20.964881416 +0000 UTC m=+0.019859430 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:01:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:21 compute-0 podman[250784]: 2026-01-20 19:01:21.073215986 +0000 UTC m=+0.128193980 container init 08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_elgamal, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:01:21 compute-0 podman[250784]: 2026-01-20 19:01:21.089817927 +0000 UTC m=+0.144795921 container start 08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_elgamal, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:01:21 compute-0 podman[250784]: 2026-01-20 19:01:21.093293731 +0000 UTC m=+0.148271765 container attach 08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_elgamal, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:01:21 compute-0 reverent_elgamal[250805]: 167 167
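
The short-lived containers that print '167 167' (reverent_elgamal here, clever_kowalevski and jovial_proskuriakova below) look like cephadm probing the image for the ceph user's uid/gid; 167 is the ceph uid/gid in upstream Ceph container images. A hedged sketch of an equivalent probe, assuming cephadm stats /var/lib/ceph inside the image (exact cephadm internals may differ):

    # Sketch: discover the ceph uid/gid baked into the image, matching
    # the "167 167" lines in this log. Assumes cephadm stats
    # /var/lib/ceph; the real implementation may differ.
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         IMAGE, "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    uid, gid = map(int, out)
    print(uid, gid)  # expected: 167 167
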
Jan 20 19:01:21 compute-0 systemd[1]: libpod-08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb.scope: Deactivated successfully.
Jan 20 19:01:21 compute-0 podman[250784]: 2026-01-20 19:01:21.099092379 +0000 UTC m=+0.154070373 container died 08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-2dd972952904ccb476798a6517deeeb5e9bcad15f1d0eb8f5f554acbb57924b8-merged.mount: Deactivated successfully.
Jan 20 19:01:21 compute-0 podman[250784]: 2026-01-20 19:01:21.140727028 +0000 UTC m=+0.195705022 container remove 08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:01:21 compute-0 systemd[1]: libpod-conmon-08bde033f309ced2ed0d00af6b3afdf6d61b2e6fc7947eec79a3b74e868430eb.scope: Deactivated successfully.
Jan 20 19:01:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.302283193 +0000 UTC m=+0.043463960 container create a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bhabha, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:01:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:21.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:21 compute-0 systemd[1]: Started libpod-conmon-a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903.scope.
Jan 20 19:01:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b5154672bf6d6b9aef5eeed1d633764bf3f5d81d419c74c5b52ba7a6c0612/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b5154672bf6d6b9aef5eeed1d633764bf3f5d81d419c74c5b52ba7a6c0612/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b5154672bf6d6b9aef5eeed1d633764bf3f5d81d419c74c5b52ba7a6c0612/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b5154672bf6d6b9aef5eeed1d633764bf3f5d81d419c74c5b52ba7a6c0612/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/925b5154672bf6d6b9aef5eeed1d633764bf3f5d81d419c74c5b52ba7a6c0612/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.284908131 +0000 UTC m=+0.026088918 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.381454231 +0000 UTC m=+0.122635028 container init a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bhabha, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.390777795 +0000 UTC m=+0.131958572 container start a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.394726612 +0000 UTC m=+0.135907389 container attach a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bhabha, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:01:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:21 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:21 compute-0 sudo[250975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mixsjhsllbpcizmyswzkxxfuwebieefw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935681.0683615-3212-195908582499669/AnsiballZ_edpm_container_manage.py'
Jan 20 19:01:21 compute-0 sudo[250975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:21 compute-0 epic_bhabha[250891]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:01:21 compute-0 epic_bhabha[250891]: --> All data devices are unavailable
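
ceph-volume counts the device handed over by the drive group ('0 physical, 1 LVM') and then rejects it as unavailable, here most likely because the LV is already prepared as an OSD, as the 'lvm list' output below confirms via its ceph.* lv_tags. A standalone check for that condition, sketched with lvs from LVM2 (the helper name is illustrative):

    # Sketch: detect whether an LV is already tagged as a Ceph OSD,
    # which makes ceph-volume batch treat it as unavailable. Tag names
    # follow the conventions visible in the "lvm list" output below.
    import subprocess

    def lv_is_ceph_osd(lv_path: str) -> bool:
        out = subprocess.run(
            ["lvs", "--noheadings", "-o", "lv_tags", lv_path],
            capture_output=True, text=True, check=True,
        ).stdout
        return "ceph.osd_id=" in out

    print(lv_is_ceph_osd("/dev/ceph_vg0/ceph_lv0"))  # True for an LV like the one above
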
Jan 20 19:01:21 compute-0 systemd[1]: libpod-a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903.scope: Deactivated successfully.
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.739296324 +0000 UTC m=+0.480477091 container died a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bhabha, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-925b5154672bf6d6b9aef5eeed1d633764bf3f5d81d419c74c5b52ba7a6c0612-merged.mount: Deactivated successfully.
Jan 20 19:01:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:21 compute-0 podman[250876]: 2026-01-20 19:01:21.783779342 +0000 UTC m=+0.524960109 container remove a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_bhabha, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:01:21 compute-0 systemd[1]: libpod-conmon-a4dd9730afac5bbfcce25af690a9ceafa0a68120dbc02dce9d8d8e75c9afb903.scope: Deactivated successfully.
Jan 20 19:01:21 compute-0 sudo[250694]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:21 compute-0 sudo[250993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:01:21 compute-0 sudo[250993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:21 compute-0 sudo[250993]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:21 compute-0 python3[250978]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 19:01:21 compute-0 sudo[251018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:01:21 compute-0 sudo[251018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.435032717 +0000 UTC m=+0.049305109 container create ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kowalevski, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:01:22 compute-0 systemd[1]: Started libpod-conmon-ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241.scope.
Jan 20 19:01:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.414192751 +0000 UTC m=+0.028465123 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.513747153 +0000 UTC m=+0.128019555 container init ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.522587703 +0000 UTC m=+0.136860085 container start ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.52614893 +0000 UTC m=+0.140421312 container attach ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 20 19:01:22 compute-0 clever_kowalevski[251125]: 167 167
Jan 20 19:01:22 compute-0 systemd[1]: libpod-ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241.scope: Deactivated successfully.
Jan 20 19:01:22 compute-0 conmon[251125]: conmon ecc8414b37f4d9dce823 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241.scope/container/memory.events
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.530870277 +0000 UTC m=+0.145142669 container died ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:01:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa5e606a388ad74d554a876294cb3b76e74769708aaef35d9e3f692486a38fa4-merged.mount: Deactivated successfully.
Jan 20 19:01:22 compute-0 podman[251109]: 2026-01-20 19:01:22.577903914 +0000 UTC m=+0.192176306 container remove ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:01:22 compute-0 systemd[1]: libpod-conmon-ecc8414b37f4d9dce8235d0db36d6b4edf5fdcaebb50b78e2d330bc218de4241.scope: Deactivated successfully.
Jan 20 19:01:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:22.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:22 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0003440 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:22 compute-0 podman[251154]: 2026-01-20 19:01:22.785542699 +0000 UTC m=+0.059943018 container create 6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:01:22 compute-0 ceph-mon[74381]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:22 compute-0 systemd[1]: Started libpod-conmon-6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c.scope.
Jan 20 19:01:22 compute-0 podman[251154]: 2026-01-20 19:01:22.766222665 +0000 UTC m=+0.040623004 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:01:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6998409fc74f47c70d98f6ad5691de47b3914897a768af194e269e35eee5c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6998409fc74f47c70d98f6ad5691de47b3914897a768af194e269e35eee5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6998409fc74f47c70d98f6ad5691de47b3914897a768af194e269e35eee5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76c6998409fc74f47c70d98f6ad5691de47b3914897a768af194e269e35eee5c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:22 compute-0 podman[251154]: 2026-01-20 19:01:22.899516143 +0000 UTC m=+0.173916472 container init 6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 19:01:22 compute-0 podman[251154]: 2026-01-20 19:01:22.914258623 +0000 UTC m=+0.188658942 container start 6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 19:01:22 compute-0 podman[251154]: 2026-01-20 19:01:22.917854751 +0000 UTC m=+0.192255070 container attach 6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:01:23 compute-0 vigorous_saha[251170]: {
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:     "0": [
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:         {
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "devices": [
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "/dev/loop3"
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             ],
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "lv_name": "ceph_lv0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "lv_size": "21470642176",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "name": "ceph_lv0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "tags": {
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.cluster_name": "ceph",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.crush_device_class": "",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.encrypted": "0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.osd_id": "0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.type": "block",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.vdo": "0",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:                 "ceph.with_tpm": "0"
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             },
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "type": "block",
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:             "vg_name": "ceph_vg0"
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:         }
Jan 20 19:01:23 compute-0 vigorous_saha[251170]:     ]
Jan 20 19:01:23 compute-0 vigorous_saha[251170]: }
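
The JSON above is the full 'ceph-volume lvm list --format json' inventory: one key per OSD id, each mapping to the logical volumes backing it. A short Python sketch of summarizing it, with 'payload' assumed to hold the JSON text captured from the container's stdout:

    # Sketch: summarize the "ceph-volume lvm list --format json" output
    # shown above. `payload` is assumed to hold that JSON text.
    import json

    def summarize_lvm_list(payload: str) -> None:
        for osd_id, lvs in json.loads(payload).items():
            for lv in lvs:
                tags = lv["tags"]
                print(f"osd.{osd_id}: {lv['lv_path']} "
                      f"(osd_fsid={tags['ceph.osd_fsid']}, "
                      f"devices={','.join(lv['devices'])})")

    # With the output above this prints:
    # osd.0: /dev/ceph_vg0/ceph_lv0 (osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e, devices=/dev/loop3)
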
Jan 20 19:01:23 compute-0 podman[251154]: 2026-01-20 19:01:23.227559406 +0000 UTC m=+0.501959745 container died 6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:01:23 compute-0 systemd[1]: libpod-6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c.scope: Deactivated successfully.
Jan 20 19:01:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:23 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:23.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-76c6998409fc74f47c70d98f6ad5691de47b3914897a768af194e269e35eee5c-merged.mount: Deactivated successfully.
Jan 20 19:01:23 compute-0 podman[251154]: 2026-01-20 19:01:23.377341182 +0000 UTC m=+0.651741501 container remove 6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_saha, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:01:23 compute-0 systemd[1]: libpod-conmon-6012e7ce0263937fc15c44115ee807c1a80127f3a74da1ea882543bc8b9bb62c.scope: Deactivated successfully.
Jan 20 19:01:23 compute-0 sudo[251018]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:23 compute-0 sudo[251196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:01:23 compute-0 sudo[251196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:23 compute-0 sudo[251196]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:23 compute-0 sudo[251221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:01:23 compute-0 sudo[251221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:23 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.027587419 +0000 UTC m=+0.030363964 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.310701183 +0000 UTC m=+0.313477668 container create b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_proskuriakova, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:01:24 compute-0 systemd[1]: Started libpod-conmon-b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62.scope.
Jan 20 19:01:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.620322477 +0000 UTC m=+0.623098992 container init b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_proskuriakova, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.628040136 +0000 UTC m=+0.630816631 container start b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:01:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:24 compute-0 jovial_proskuriakova[251308]: 167 167
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.634592794 +0000 UTC m=+0.637369289 container attach b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:01:24 compute-0 systemd[1]: libpod-b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62.scope: Deactivated successfully.
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.636250629 +0000 UTC m=+0.639027134 container died b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_proskuriakova, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 19:01:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0a8e67b845e9f2763bc26b464d480a82eaeecb06efa0ac54fde06653fea58d4-merged.mount: Deactivated successfully.
Jan 20 19:01:24 compute-0 podman[251291]: 2026-01-20 19:01:24.674294851 +0000 UTC m=+0.677071346 container remove b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:01:24 compute-0 systemd[1]: libpod-conmon-b8b87b8709e1a736135ea5710111d375ce69344d6bb2088102236f02f7ba8b62.scope: Deactivated successfully.
Jan 20 19:01:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:24 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:24 compute-0 podman[251331]: 2026-01-20 19:01:24.859300552 +0000 UTC m=+0.058811687 container create c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lalande, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:01:24 compute-0 podman[251331]: 2026-01-20 19:01:24.836275898 +0000 UTC m=+0.035787053 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:01:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:25 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0003440 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:25.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:25 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d80036f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:26 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:27 compute-0 ceph-mon[74381]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:27 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:01:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:27.102Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:01:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:27.103Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
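
Alertmanager is repeatedly failing to deliver to the Ceph dashboard's prometheus_receiver endpoint on compute-1 and compute-2 (first an i/o timeout, then retries canceled on context deadline), while compute-0 itself stays healthy. A minimal probe of that endpoint, sketched in Python with an illustrative 5 s timeout and an empty alert batch:

    # Sketch: probe the dashboard receiver endpoint that Alertmanager
    # could not reach above. The timeout and empty alert batch are
    # illustrative, not values taken from this log.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:  # covers i/o timeouts and refused connections, as in the log
        print("unreachable:", exc)
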
Jan 20 19:01:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:27.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:27 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c0003440 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:28.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:28 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8004f60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:29 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:29.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:29 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:29] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 19:01:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:29] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 19:01:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:01:30.275 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:01:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:01:30.276 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:01:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:01:30.276 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:01:30 compute-0 ceph-mon[74381]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:30.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:30 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00035e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8004f60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:31.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:31 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:32.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:32 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:33 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:33.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:33 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8004f60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:34 compute-0 systemd[1]: Started libpod-conmon-c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55.scope.
Jan 20 19:01:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aca4a042602eb1ab1863493986bae1164fb16396375677842fad27fb6468ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aca4a042602eb1ab1863493986bae1164fb16396375677842fad27fb6468ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aca4a042602eb1ab1863493986bae1164fb16396375677842fad27fb6468ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93aca4a042602eb1ab1863493986bae1164fb16396375677842fad27fb6468ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:34 compute-0 podman[251331]: 2026-01-20 19:01:34.055234335 +0000 UTC m=+9.254745510 container init c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:01:34 compute-0 podman[251331]: 2026-01-20 19:01:34.070691415 +0000 UTC m=+9.270202560 container start c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lalande, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:01:34 compute-0 podman[251057]: 2026-01-20 19:01:34.073177822 +0000 UTC m=+12.037338200 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 19:01:34 compute-0 podman[251331]: 2026-01-20 19:01:34.076334418 +0000 UTC m=+9.275845573 container attach c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lalande, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:01:34 compute-0 podman[251424]: 2026-01-20 19:01:34.237157453 +0000 UTC m=+0.066272810 container create d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 19:01:34 compute-0 podman[251424]: 2026-01-20 19:01:34.200051446 +0000 UTC m=+0.029166833 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 19:01:34 compute-0 python3[250978]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 20 19:01:34 compute-0 sudo[250975]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:34 compute-0 sudo[251520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:01:34 compute-0 sudo[251520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:34 compute-0 sudo[251520]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:34.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:34 compute-0 lvm[251580]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:01:34 compute-0 lvm[251580]: VG ceph_vg0 finished
Jan 20 19:01:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:34 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:34 compute-0 vibrant_lalande[251397]: {}
Jan 20 19:01:34 compute-0 systemd[1]: libpod-c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55.scope: Deactivated successfully.
Jan 20 19:01:34 compute-0 podman[251331]: 2026-01-20 19:01:34.830674901 +0000 UTC m=+10.030186026 container died c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lalande, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 19:01:34 compute-0 systemd[1]: libpod-c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55.scope: Consumed 1.099s CPU time.
Jan 20 19:01:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-93aca4a042602eb1ab1863493986bae1164fb16396375677842fad27fb6468ab-merged.mount: Deactivated successfully.
Jan 20 19:01:34 compute-0 podman[251331]: 2026-01-20 19:01:34.885318304 +0000 UTC m=+10.084829449 container remove c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_lalande, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:01:34 compute-0 systemd[1]: libpod-conmon-c391cb3545a6647469d0d8660f31419a01386d22caaf30566eca714ff57a9f55.scope: Deactivated successfully.
Jan 20 19:01:34 compute-0 sudo[251221]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:35 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4ac0032f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:35.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:35 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:35 compute-0 ceph-mon[74381]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:01:36 compute-0 podman[251595]: 2026-01-20 19:01:36.098736308 +0000 UTC m=+0.063316530 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 19:01:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:36.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:36 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8004f60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:37.104Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:01:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:37 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:37.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:37 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc001080 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:38 compute-0 ceph-mon[74381]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:38 compute-0 ceph-mon[74381]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:38 compute-0 ceph-mon[74381]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:01:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:38 compute-0 sudo[251620]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:01:38 compute-0 sudo[251620]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:38 compute-0 sudo[251620]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:38.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:38 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:39 compute-0 sudo[251770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udiehvxurjsageaqzvtmssajbyvbzzir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935698.9097495-3236-145116078483711/AnsiballZ_stat.py'
Jan 20 19:01:39 compute-0 sudo[251770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:39 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4d8004f60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:39.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:39 compute-0 python3.9[251772]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:01:39 compute-0 sudo[251770]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:39 compute-0 ceph-mon[74381]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:39 compute-0 ceph-mon[74381]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:01:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:39 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:39] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 19:01:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:39] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Jan 20 19:01:40 compute-0 sudo[251926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbnrtzoidslaunltvnxfavjdikrnqoxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935700.1730611-3272-178462768898028/AnsiballZ_container_config_data.py'
Jan 20 19:01:40 compute-0 sudo[251926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:40 compute-0 ceph-mon[74381]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:01:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:40.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:40 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc0023c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:40 compute-0 python3.9[251928]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 20 19:01:40 compute-0 sudo[251926]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:41.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:41 compute-0 sudo[252079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgmkeermjsvnbpxxgmweuaozbjutbidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935701.1974273-3305-242137319132566/AnsiballZ_container_config_hash.py'
Jan 20 19:01:41 compute-0 sudo[252079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:41 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:41 compute-0 python3.9[252081]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 19:01:41 compute-0 sudo[252079]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:42 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:43 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc002540 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:43 compute-0 sudo[252233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pejkbhddbtplxwqhpqvbjqqsoglwjtoj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935702.9571888-3335-33102008723566/AnsiballZ_edpm_container_manage.py'
Jan 20 19:01:43 compute-0 sudo[252233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:43.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:43 compute-0 ceph-mon[74381]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:43 compute-0 python3[252235]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 19:01:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:43 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:43 compute-0 podman[252274]: 2026-01-20 19:01:43.931061313 +0000 UTC m=+0.089181912 container create 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 20 19:01:43 compute-0 podman[252274]: 2026-01-20 19:01:43.890614545 +0000 UTC m=+0.048735204 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 19:01:43 compute-0 python3[252235]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 20 19:01:44 compute-0 sudo[252233]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:44 compute-0 podman[252298]: 2026-01-20 19:01:44.197024881 +0000 UTC m=+0.164923637 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:01:44 compute-0 ceph-mon[74381]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:44 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:44 compute-0 sudo[252489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usixqhuqkuelsdygokfqrxwtuezplvcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935704.4210262-3359-24801917905937/AnsiballZ_stat.py'
Jan 20 19:01:44 compute-0 sudo[252489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:44 compute-0 python3.9[252491]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:01:44 compute-0 sudo[252489]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:45 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:45.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:45 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc002ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:45 compute-0 sudo[252644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jowvsbuxqxsupmxoeetpwuymszjjqioq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935705.2490404-3386-128578905385384/AnsiballZ_file.py'
Jan 20 19:01:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:45 compute-0 sudo[252644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:46 compute-0 python3.9[252646]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:46 compute-0 sudo[252644]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:46 compute-0 sudo[252796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sntuiynuojgzhjhgkxfmjuhqwpwyeuok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935706.0806084-3386-262797027270822/AnsiballZ_copy.py'
Jan 20 19:01:46 compute-0 sudo[252796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:46.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:46 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b40010b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:46 compute-0 python3.9[252798]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935706.0806084-3386-262797027270822/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:46 compute-0 sudo[252796]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:47 compute-0 sudo[252872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmcfnbxneoqnppracwdkxpqziceqehvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935706.0806084-3386-262797027270822/AnsiballZ_systemd.py'
Jan 20 19:01:47 compute-0 sudo[252872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:47.106Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:01:47 compute-0 ceph-mon[74381]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:47 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:47 compute-0 python3.9[252874]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:01:47 compute-0 systemd[1]: Reloading.
Jan 20 19:01:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:47.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:47 compute-0 systemd-rc-local-generator[252904]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:01:47 compute-0 systemd-sysv-generator[252908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:01:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:47 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:47 compute-0 sudo[252872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:47 compute-0 sudo[252985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqtnenpfriyveuonuxsvruprailjtyvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935706.0806084-3386-262797027270822/AnsiballZ_systemd.py'
Jan 20 19:01:47 compute-0 sudo[252985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:48 compute-0 python3.9[252987]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:01:48 compute-0 systemd[1]: Reloading.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.370984) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935708371017, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 1133, "num_deletes": 251, "total_data_size": 2105892, "memory_usage": 2149656, "flush_reason": "Manual Compaction"}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935708387750, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 2046489, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19303, "largest_seqno": 20434, "table_properties": {"data_size": 2041001, "index_size": 2883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11797, "raw_average_key_size": 19, "raw_value_size": 2030041, "raw_average_value_size": 3429, "num_data_blocks": 128, "num_entries": 592, "num_filter_entries": 592, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935600, "oldest_key_time": 1768935600, "file_creation_time": 1768935708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 16848 microseconds, and 4457 cpu microseconds.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.387797) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 2046489 bytes OK
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.387848) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.388966) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.388978) EVENT_LOG_v1 {"time_micros": 1768935708388974, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.388997) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 2100794, prev total WAL file size 2100794, number of live WAL files 2.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.389602) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(1998KB)], [41(13MB)]
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935708389679, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 16092362, "oldest_snapshot_seqno": -1}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:48 compute-0 systemd-rc-local-generator[253018]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:01:48 compute-0 systemd-sysv-generator[253021]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 5234 keys, 13893771 bytes, temperature: kUnknown
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935708484231, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 13893771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13857178, "index_size": 22403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13125, "raw_key_size": 133514, "raw_average_key_size": 25, "raw_value_size": 13760734, "raw_average_value_size": 2629, "num_data_blocks": 918, "num_entries": 5234, "num_filter_entries": 5234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935708, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.484453) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 13893771 bytes
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.486085) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.1 rd, 146.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.4 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(14.7) write-amplify(6.8) OK, records in: 5754, records dropped: 520 output_compression: NoCompression
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.486102) EVENT_LOG_v1 {"time_micros": 1768935708486095, "job": 20, "event": "compaction_finished", "compaction_time_micros": 94603, "compaction_time_cpu_micros": 24869, "output_level": 6, "num_output_files": 1, "total_output_size": 13893771, "num_input_records": 5754, "num_output_records": 5234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935708486476, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935708488394, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.389511) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.488459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.488464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.488466) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.488468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:01:48 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:01:48.488470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:01:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:48.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:48 compute-0 systemd[1]: Starting nova_compute container...
Jan 20 19:01:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:48 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc002ef0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:48 compute-0 podman[253028]: 2026-01-20 19:01:48.8384371 +0000 UTC m=+0.101442644 container init 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:01:48 compute-0 podman[253028]: 2026-01-20 19:01:48.846336205 +0000 UTC m=+0.109341729 container start 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:01:48 compute-0 podman[253028]: nova_compute
Jan 20 19:01:48 compute-0 nova_compute[253043]: + sudo -E kolla_set_configs
Jan 20 19:01:48 compute-0 systemd[1]: Started nova_compute container.
Jan 20 19:01:48 compute-0 sudo[252985]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Validating config file
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying service configuration files
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Deleting /etc/ceph
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Creating directory /etc/ceph
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Writing out command to execute
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:48 compute-0 nova_compute[253043]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:01:48 compute-0 nova_compute[253043]: ++ cat /run_command
Jan 20 19:01:48 compute-0 nova_compute[253043]: + CMD=nova-compute
Jan 20 19:01:48 compute-0 nova_compute[253043]: + ARGS=
Jan 20 19:01:48 compute-0 nova_compute[253043]: + sudo kolla_copy_cacerts
Jan 20 19:01:48 compute-0 nova_compute[253043]: + [[ ! -n '' ]]
Jan 20 19:01:48 compute-0 nova_compute[253043]: + . kolla_extend_start
Jan 20 19:01:48 compute-0 nova_compute[253043]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 19:01:48 compute-0 nova_compute[253043]: Running command: 'nova-compute'
Jan 20 19:01:48 compute-0 nova_compute[253043]: + umask 0022
Jan 20 19:01:48 compute-0 nova_compute[253043]: + exec nova-compute
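The INFO:__main__ block above is kolla_set_configs applying the COPY_ALWAYS strategy: it reads /var/lib/kolla/config_files/config.json, copies each listed source to its destination, fixes ownership and permissions, and writes the command (nova-compute) to /run_command for kolla_start to exec. A sketch of the config.json shape that drives those steps; only the paths appear in the log, the owner/perm values are illustrative assumptions:

    import json

    # Two representative entries matching copies seen in the log above.
    config = {
        "command": "nova-compute",
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/01-nova.conf",
                "dest": "/etc/nova/nova.conf.d/01-nova.conf",
                "owner": "nova",     # assumed
                "perm": "0600",      # assumed
            },
            {
                "source": "/var/lib/kolla/config_files/ceph/ceph.conf",
                "dest": "/etc/ceph/ceph.conf",
                "owner": "nova",     # assumed
                "perm": "0600",      # assumed
            },
        ],
    }
    print(json.dumps(config, indent=2))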
Jan 20 19:01:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:49 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:49.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:49 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:49] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:01:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:49] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
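The same Prometheus scrape is logged twice, once by the mgr container's cherrypy access log and once through the mgr's own log channel. A sketch of hitting that endpoint directly; the port (9283, the mgr prometheus module default) is an assumption, only the host, the /metrics path, and the 200/48334-byte response come from the log:

    import urllib.request

    # Fetch the ceph-mgr prometheus exporter and count exposition samples.
    with urllib.request.urlopen("http://192.168.122.100:9283/metrics",
                                timeout=5) as resp:
        body = resp.read().decode()
    samples = [l for l in body.splitlines() if l and not l.startswith("#")]
    print(f"{resp.status} OK, {len(body)} bytes, {len(samples)} samples")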
Jan 20 19:01:50 compute-0 python3.9[253207]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
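The edpm-ansible run keeps probing unit files like this between service restarts. A rough Python equivalent of what that stat invocation gathers (existence plus a sha1 checksum, per checksum_algorithm=sha1); the path is from the log, everything else is illustrative:

    import hashlib
    import os

    path = "/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service"
    if os.path.exists(path):
        with open(path, "rb") as f:
            print(path, hashlib.sha1(f.read()).hexdigest())
    else:
        print(path, "absent")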
Jan 20 19:01:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:50 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:51 compute-0 ceph-mon[74381]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.056 253047 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.057 253047 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.058 253047 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.058 253047 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
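os_vif discovers its VIF plugins (linux_bridge, noop, ovs) through Python entry points, which is what produces the three "Loaded VIF plugin class" lines above. The equivalent call, assuming os_vif and the plugin packages are installed:

    import os_vif

    # Loads and initializes every registered VIF plugin, logging each one
    # the same way nova-compute does at startup.
    os_vif.initialize()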
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.221 253047 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.245 253047 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.246 253047 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
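The rc=1 from grep is expected here: /usr/sbin/iscsiadm was replaced with the run-on-host shim during kolla_set_configs a few lines earlier, so the binary no longer contains the node.session.scan string that the iSCSI connector greps for to detect manual-scan support. A sketch of the same probe; treating rc==0 as "supported" mirrors the grep semantics, and wiring this into the connector's actual decision path is an assumption:

    import subprocess

    # Probe the installed iscsiadm for the manual-scan capability string.
    proc = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        capture_output=True,
    )
    print("manual scan supported" if proc.returncode == 0 else
          "manual scan not supported (grep rc=%d)" % proc.returncode)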
Jan 20 19:01:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:51 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:51.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:51 compute-0 python3.9[253361]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:01:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:51 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:51 compute-0 nova_compute[253043]: 2026-01-20 19:01:51.870 253047 INFO nova.virt.driver [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.154 253047 INFO nova.compute.provider_config [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.171 253047 DEBUG oslo_concurrency.lockutils [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.172 253047 DEBUG oslo_concurrency.lockutils [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.172 253047 DEBUG oslo_concurrency.lockutils [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
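Everything from "Full set of CONF:" down is oslo.service asking oslo.config to dump every registered option at DEBUG, one line per option (the log_opt_values frames on each line). A minimal sketch of the same mechanism with a couple of illustrative options; nova registers a far larger set, and the two opts below are assumptions for demonstration:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    CONF.register_opts([
        cfg.BoolOpt("debug", default=True),
        cfg.StrOpt("compute_driver", default="libvirt.LibvirtDriver"),
    ])
    CONF([])                                 # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # emits one line per option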
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.172 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.173 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.173 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.173 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.173 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.173 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.174 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.174 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.174 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.174 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.174 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.175 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.175 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.175 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.175 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.176 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.176 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.176 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.176 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.176 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.177 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.177 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.177 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.177 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.178 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.178 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.178 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.178 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.178 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.179 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.179 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.179 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.179 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.180 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.180 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.180 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.180 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.180 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.181 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.181 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.181 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.181 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.182 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.182 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.182 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.182 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.182 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.183 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.183 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.183 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.183 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.184 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.184 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.184 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.184 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.184 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.185 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.185 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.185 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.185 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.186 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.186 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.186 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.186 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.187 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.187 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.187 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.187 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.188 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.188 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.188 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.189 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.189 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.189 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.189 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.189 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.190 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.190 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.190 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.191 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.191 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.191 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.191 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.191 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.192 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.192 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.192 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.192 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.192 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.193 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.193 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.193 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.193 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.193 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.194 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.194 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.194 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.194 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.194 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.195 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.195 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.195 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.195 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.195 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.196 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.196 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.196 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.196 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.196 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.197 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.197 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.197 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.197 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.197 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.198 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.198 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.198 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.198 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.199 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.199 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.199 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.199 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.200 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.200 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.200 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.200 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.201 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.201 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.201 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.201 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.201 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.202 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.202 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.202 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.202 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.203 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.203 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.203 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.203 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.203 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.204 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.204 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.204 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.204 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.204 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.205 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.205 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.205 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.205 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.206 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.206 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.206 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.206 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.207 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.207 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.207 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.208 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.208 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.208 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.208 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.209 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.209 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.209 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.209 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.209 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.210 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.210 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.210 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.210 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.211 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.211 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.211 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.211 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.211 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.212 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.212 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.212 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.212 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.212 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.212 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.213 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.214 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.215 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.216 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.217 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.218 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.219 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.220 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.221 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.222 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.222 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.223 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.223 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.223 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.223 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.223 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.224 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.225 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.226 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.227 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.227 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.227 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.227 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.227 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.227 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.228 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.229 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.230 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.230 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.230 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.230 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.230 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.231 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.232 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.232 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.232 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.232 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.233 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.234 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.235 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.236 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.237 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.238 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.239 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.239 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.239 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.239 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.239 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.240 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.241 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.242 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.243 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.244 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.245 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.246 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.247 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.248 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.249 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.250 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.251 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.251 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.251 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.251 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.251 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.251 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.252 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.253 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.254 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.255 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.256 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.257 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.258 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.258 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.258 253047 WARNING oslo_config.cfg [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 19:01:52 compute-0 nova_compute[253043]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 19:01:52 compute-0 nova_compute[253043]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 19:01:52 compute-0 nova_compute[253043]: and ``live_migration_inbound_addr`` respectively.
Jan 20 19:01:52 compute-0 nova_compute[253043]: ).  Its value may be silently ignored in the future.
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.258 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.258 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.258 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.259 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.260 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rbd_secret_uuid        = aecbbf3b-b405-507b-97d7-637a83f5b4b1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.261 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.262 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.263 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.263 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.263 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.263 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.263 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.263 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.264 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.265 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.265 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.265 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.265 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.265 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.266 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.267 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.268 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.269 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.270 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.271 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.272 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.273 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.274 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.275 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.276 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.277 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.278 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.279 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.280 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.281 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.282 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.283 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.284 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.285 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.286 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.286 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.286 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.286 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.286 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.286 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.287 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.288 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.289 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.289 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.289 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.289 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.289 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.289 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.290 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.291 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.292 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.293 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.293 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.293 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.293 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.293 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.294 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.294 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.294 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.294 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.294 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.294 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.295 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.296 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.297 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.298 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.299 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.300 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.301 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.302 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.303 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.304 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.305 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.305 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.305 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.305 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.305 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.305 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.306 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.306 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.306 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.306 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.306 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.306 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.307 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.308 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.309 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.310 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.311 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.312 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.312 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.312 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.312 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.312 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.312 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.313 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.313 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.313 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.313 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.313 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.314 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.315 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.315 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.315 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.315 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.315 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.315 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.316 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.317 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.318 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.318 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.318 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.318 253047 DEBUG oslo_service.service [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.319 253047 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.333 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.334 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.334 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.334 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 19:01:52 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 19:01:52 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 20 19:01:52 compute-0 python3.9[253512]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.399 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fbba10d4610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.401 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fbba10d4610> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.402 253047 INFO nova.virt.libvirt.driver [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Connection event '1' reason 'None'
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.443 253047 WARNING nova.virt.libvirt.driver [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 19:01:52 compute-0 nova_compute[253043]: 2026-01-20 19:01:52.444 253047 DEBUG nova.virt.libvirt.volume.mount [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 19:01:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:52.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:52 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.201 253047 INFO nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]: 
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <host>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <uuid>19a62fa8-72e0-4d98-a48b-b9301ceb89c2</uuid>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <arch>x86_64</arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model>EPYC-Rome-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <vendor>AMD</vendor>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <microcode version='16777317'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <signature family='23' model='49' stepping='0'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='x2apic'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='tsc-deadline'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='osxsave'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='hypervisor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='tsc_adjust'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='spec-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='stibp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='arch-capabilities'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='cmp_legacy'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='topoext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='virt-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='lbrv'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='tsc-scale'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='vmcb-clean'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='pause-filter'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='pfthreshold'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='svme-addr-chk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='rdctl-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='mds-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature name='pschange-mc-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <pages unit='KiB' size='4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <pages unit='KiB' size='2048'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <pages unit='KiB' size='1048576'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <power_management>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <suspend_mem/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </power_management>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <iommu support='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <migration_features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <live/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <uri_transports>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <uri_transport>tcp</uri_transport>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <uri_transport>rdma</uri_transport>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </uri_transports>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </migration_features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <topology>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <cells num='1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <cell id='0'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           <memory unit='KiB'>7864316</memory>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           <distances>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <sibling id='0' value='10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           </distances>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           <cpus num='8'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:           </cpus>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         </cell>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </cells>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </topology>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <cache>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </cache>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <secmodel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model>selinux</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <doi>0</doi>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </secmodel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <secmodel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model>dac</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <doi>0</doi>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </secmodel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </host>
Jan 20 19:01:53 compute-0 nova_compute[253043]: 
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <guest>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <os_type>hvm</os_type>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <arch name='i686'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <wordsize>32</wordsize>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <domain type='qemu'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <domain type='kvm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <pae/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <nonpae/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <acpi default='on' toggle='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <apic default='on' toggle='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <cpuselection/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <deviceboot/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <disksnapshot default='on' toggle='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <externalSnapshot/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </guest>
Jan 20 19:01:53 compute-0 nova_compute[253043]: 
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <guest>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <os_type>hvm</os_type>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <arch name='x86_64'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <wordsize>64</wordsize>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <domain type='qemu'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <domain type='kvm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <acpi default='on' toggle='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <apic default='on' toggle='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <cpuselection/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <deviceboot/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <disksnapshot default='on' toggle='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <externalSnapshot/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </guest>
Jan 20 19:01:53 compute-0 nova_compute[253043]: 
Jan 20 19:01:53 compute-0 nova_compute[253043]: </capabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]: 
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.210 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.235 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 19:01:53 compute-0 nova_compute[253043]: <domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <domain>kvm</domain>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <arch>i686</arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <vcpu max='4096'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <iothreads supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <os supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='firmware'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <loader supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>rom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pflash</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='readonly'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>yes</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='secure'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </loader>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </os>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='maximum' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='maximumMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-model' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <vendor>AMD</vendor>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='x2apic'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='stibp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='succor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lbrv'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='custom' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Dhyana-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest'>
Jan 20 19:01:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:53 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <memoryBacking supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='sourceType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>anonymous</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>memfd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </memoryBacking>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <disk supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='diskDevice'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>disk</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cdrom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>floppy</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>lun</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>fdc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>sata</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </disk>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <graphics supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vnc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egl-headless</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </graphics>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <video supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='modelType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vga</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cirrus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>none</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>bochs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ramfb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </video>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hostdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='mode'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>subsystem</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='startupPolicy'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>mandatory</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>requisite</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>optional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='subsysType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pci</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='capsType'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='pciBackend'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hostdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <rng supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>random</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </rng>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <filesystem supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='driverType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>path</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>handle</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtiofs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </filesystem>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tpm supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-tis</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-crb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emulator</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>external</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendVersion'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>2.0</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </tpm>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <redirdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </redirdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <channel supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </channel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <crypto supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </crypto>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <interface supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>passt</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </interface>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <panic supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>isa</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>hyperv</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </panic>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <console supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>null</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dev</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pipe</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stdio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>udp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tcp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu-vdagent</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </console>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <gic supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <vmcoreinfo supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <genid supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backingStoreInput supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backup supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <async-teardown supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <s390-pv supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <ps2 supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tdx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sev supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sgx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hyperv supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='features'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>relaxed</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vapic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>spinlocks</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vpindex</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>runtime</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>synic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stimer</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reset</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vendor_id</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>frequencies</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reenlightenment</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tlbflush</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ipi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>avic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emsr_bitmap</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>xmm_input</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <spinlocks>4095</spinlocks>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <stimer_direct>on</stimer_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hyperv>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <launchSecurity supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </features>
Jan 20 19:01:53 compute-0 nova_compute[253043]: </domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.242 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 19:01:53 compute-0 nova_compute[253043]: <domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <domain>kvm</domain>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <arch>i686</arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <vcpu max='240'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <iothreads supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <os supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='firmware'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <loader supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>rom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pflash</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='readonly'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>yes</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='secure'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </loader>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </os>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='maximum' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='maximumMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-model' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <vendor>AMD</vendor>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='x2apic'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='stibp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='succor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lbrv'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='custom' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Dhyana-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <memoryBacking supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='sourceType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>anonymous</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>memfd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </memoryBacking>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <disk supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='diskDevice'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>disk</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cdrom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>floppy</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>lun</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ide</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>fdc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>sata</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </disk>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <graphics supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vnc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egl-headless</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </graphics>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <video supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='modelType'>
Jan 20 19:01:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vga</value>
Jan 20 19:01:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:53.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cirrus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>none</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>bochs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ramfb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </video>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hostdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='mode'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>subsystem</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='startupPolicy'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>mandatory</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>requisite</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>optional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='subsysType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pci</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='capsType'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='pciBackend'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hostdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <rng supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>random</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </rng>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <filesystem supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='driverType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>path</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>handle</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtiofs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </filesystem>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tpm supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-tis</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-crb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emulator</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>external</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendVersion'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>2.0</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </tpm>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <redirdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </redirdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <channel supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </channel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <crypto supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </crypto>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <interface supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>passt</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </interface>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <panic supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>isa</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>hyperv</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </panic>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <console supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>null</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dev</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pipe</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stdio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>udp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tcp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu-vdagent</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </console>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <gic supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <vmcoreinfo supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <genid supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backingStoreInput supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backup supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <async-teardown supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <s390-pv supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <ps2 supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tdx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sev supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sgx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hyperv supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='features'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>relaxed</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vapic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>spinlocks</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vpindex</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>runtime</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>synic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stimer</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reset</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vendor_id</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>frequencies</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reenlightenment</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tlbflush</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ipi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>avic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emsr_bitmap</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>xmm_input</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <spinlocks>4095</spinlocks>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <stimer_direct>on</stimer_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hyperv>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <launchSecurity supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </features>
Jan 20 19:01:53 compute-0 nova_compute[253043]: </domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
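The domainCapabilities record above is what nova-compute's _get_domain_capabilities retrieves from libvirt for each (arch, machine type) pair it probes. For reference, the same XML can be fetched outside of Nova. The following is a minimal sketch using the libvirt Python binding; the URI qemu:///system, the emulator path /usr/libexec/qemu-kvm, and the machine types {'q35', 'pc'} are taken from the log lines here, while the loop itself is illustrative and not Nova's actual code path:

    # Sketch: fetch the same domainCapabilities XML that the debug log above dumps.
    # Assumes libvirt-python is installed and qemu:///system is reachable.
    import libvirt

    conn = libvirt.open('qemu:///system')
    for machine in ('q35', 'pc'):         # machine types from the debug line below
        xml = conn.getDomainCapabilities(
            '/usr/libexec/qemu-kvm',      # emulator binary, matches <path> in the XML
            'x86_64',                     # architecture, matches <arch>
            machine,                      # machine type, e.g. resolves to pc-q35-rhel9.8.0
            'kvm',                        # virtualization type, matches <domain>
            0,                            # flags (none defined for this call)
        )
        print(machine, len(xml), 'bytes of capabilities XML')
    conn.close()

An equivalent command-line check, if preferred, is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm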
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.306 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.310 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 19:01:53 compute-0 nova_compute[253043]: <domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <domain>kvm</domain>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <arch>x86_64</arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <vcpu max='4096'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <iothreads supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <os supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='firmware'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>efi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <loader supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>rom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pflash</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='readonly'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>yes</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='secure'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>yes</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </loader>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </os>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='maximum' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='maximumMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-model' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <vendor>AMD</vendor>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='x2apic'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='stibp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='succor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lbrv'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='custom' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Dhyana-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <memoryBacking supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='sourceType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>anonymous</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>memfd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </memoryBacking>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <disk supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='diskDevice'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>disk</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cdrom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>floppy</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>lun</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>fdc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>sata</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </disk>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <graphics supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vnc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egl-headless</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </graphics>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <video supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='modelType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vga</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cirrus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>none</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>bochs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ramfb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </video>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hostdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='mode'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>subsystem</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='startupPolicy'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>mandatory</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>requisite</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>optional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='subsysType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pci</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='capsType'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='pciBackend'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hostdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <rng supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>random</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </rng>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <filesystem supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='driverType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>path</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>handle</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtiofs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </filesystem>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tpm supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-tis</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-crb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emulator</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>external</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendVersion'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>2.0</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </tpm>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <redirdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </redirdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <channel supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </channel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <crypto supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </crypto>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <interface supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>passt</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </interface>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <panic supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>isa</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>hyperv</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </panic>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <console supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>null</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dev</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pipe</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stdio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>udp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tcp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu-vdagent</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </console>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <gic supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <vmcoreinfo supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <genid supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backingStoreInput supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backup supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <async-teardown supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <s390-pv supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <ps2 supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tdx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sev supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sgx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hyperv supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='features'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>relaxed</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vapic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>spinlocks</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vpindex</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>runtime</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>synic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stimer</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reset</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vendor_id</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>frequencies</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reenlightenment</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tlbflush</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ipi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>avic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emsr_bitmap</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>xmm_input</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <spinlocks>4095</spinlocks>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <stimer_direct>on</stimer_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hyperv>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <launchSecurity supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </features>
Jan 20 19:01:53 compute-0 nova_compute[253043]: </domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.383 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 19:01:53 compute-0 nova_compute[253043]: <domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <domain>kvm</domain>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <arch>x86_64</arch>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <vcpu max='240'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <iothreads supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <os supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='firmware'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <loader supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>rom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pflash</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='readonly'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>yes</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='secure'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>no</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </loader>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </os>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='maximum' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='maximumMigratable'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>on</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>off</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='host-model' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <vendor>AMD</vendor>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='x2apic'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='stibp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='succor'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lbrv'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <mode name='custom' supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Broadwell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ddpd-u'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sha512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm3'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sm4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Cooperlake-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Denverton-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Dhyana-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amd-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='auto-ibrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibpb-brtype'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='no-nested-data-bp'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='null-sel-clr-base'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='perfmon-v2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbpb'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='stibp-always-on'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='EPYC-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-128'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-256'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx10-512'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='prefetchiti'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Haswell-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='IvyBridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='KnightsMill-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4fmaps'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-4vnniw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512er'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512pf'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fma4'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tbm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xop'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='amx-tile'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-bf16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-fp16'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bitalg'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vbmi2'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrc'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fzrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='la57'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='taa-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='tsx-ldtrk'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='SierraForest-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ifma'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-ne-convert'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx-vnni-int8'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bhi-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='bus-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cmpccxadd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fbsdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='fsrs'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ibrs-all'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='intel-psfd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ipred-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='lam'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mcdt-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pbrsb-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='psdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rrsba-ctrl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='serialize'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vaes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='vpclmulqdq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='hle'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='rtm'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512bw'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512cd'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512dq'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512f'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='avx512vl'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='invpcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pcid'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='pku'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='mpx'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v2'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v3'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='core-capability'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='split-lock-detect'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='Snowridge-v4'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='cldemote'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='erms'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='gfni'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdir64b'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='movdiri'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='xsaves'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='athlon-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='core2duo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='coreduo-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='n270-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='ss'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <blockers model='phenom-v1'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnow'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <feature name='3dnowext'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </blockers>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </mode>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </cpu>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <memoryBacking supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <enum name='sourceType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>anonymous</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <value>memfd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </memoryBacking>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <disk supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='diskDevice'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>disk</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cdrom</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>floppy</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>lun</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ide</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>fdc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>sata</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </disk>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <graphics supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vnc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egl-headless</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </graphics>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <video supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='modelType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vga</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>cirrus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>none</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>bochs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ramfb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </video>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hostdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='mode'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>subsystem</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='startupPolicy'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>mandatory</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>requisite</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>optional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='subsysType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pci</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>scsi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='capsType'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='pciBackend'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hostdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <rng supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtio-non-transitional</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>random</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>egd</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </rng>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <filesystem supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='driverType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>path</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>handle</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>virtiofs</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </filesystem>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tpm supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-tis</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tpm-crb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emulator</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>external</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendVersion'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>2.0</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </tpm>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <redirdev supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='bus'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>usb</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </redirdev>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <channel supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </channel>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <crypto supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendModel'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>builtin</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </crypto>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <interface supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='backendType'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>default</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>passt</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </interface>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <panic supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='model'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>isa</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>hyperv</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </panic>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <console supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='type'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>null</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vc</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pty</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dev</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>file</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>pipe</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stdio</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>udp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tcp</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>unix</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>qemu-vdagent</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>dbus</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </console>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </devices>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   <features>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <gic supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <vmcoreinfo supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <genid supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backingStoreInput supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <backup supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <async-teardown supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <s390-pv supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <ps2 supported='yes'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <tdx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sev supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <sgx supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <hyperv supported='yes'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <enum name='features'>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>relaxed</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vapic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>spinlocks</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vpindex</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>runtime</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>synic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>stimer</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reset</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>vendor_id</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>frequencies</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>reenlightenment</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>tlbflush</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>ipi</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>avic</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>emsr_bitmap</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <value>xmm_input</value>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </enum>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       <defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <spinlocks>4095</spinlocks>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <stimer_direct>on</stimer_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:01:53 compute-0 nova_compute[253043]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:01:53 compute-0 nova_compute[253043]:       </defaults>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     </hyperv>
Jan 20 19:01:53 compute-0 nova_compute[253043]:     <launchSecurity supported='no'/>
Jan 20 19:01:53 compute-0 nova_compute[253043]:   </features>
Jan 20 19:01:53 compute-0 nova_compute[253043]: </domainCapabilities>
Jan 20 19:01:53 compute-0 nova_compute[253043]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.462 253047 DEBUG nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.463 253047 INFO nova.virt.libvirt.host [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Secure Boot support detected
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.465 253047 INFO nova.virt.libvirt.driver [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.466 253047 INFO nova.virt.libvirt.driver [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.480 253047 DEBUG nova.virt.libvirt.driver [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.551 253047 INFO nova.virt.node [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Determined node identity cb9161e5-191d-495c-920a-01144f42a215 from /var/lib/nova/compute_id
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.602 253047 WARNING nova.compute.manager [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Compute nodes ['cb9161e5-191d-495c-920a-01144f42a215'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.654 253047 INFO nova.compute.manager [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 19:01:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:53 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4c00043a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.732 253047 WARNING nova.compute.manager [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.733 253047 DEBUG oslo_concurrency.lockutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.733 253047 DEBUG oslo_concurrency.lockutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.733 253047 DEBUG oslo_concurrency.lockutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.733 253047 DEBUG nova.compute.resource_tracker [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:01:53 compute-0 nova_compute[253043]: 2026-01-20 19:01:53.734 253047 DEBUG oslo_concurrency.processutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:01:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:01:54 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185293698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.152 253047 DEBUG oslo_concurrency.processutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:01:54 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 19:01:54 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 20 19:01:54 compute-0 sudo[253724]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:01:54 compute-0 sudo[253724]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:54 compute-0 sudo[253724]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:54.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:54 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002690 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.749 253047 WARNING nova.virt.libvirt.driver [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.751 253047 DEBUG nova.compute.resource_tracker [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4865MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.751 253047 DEBUG oslo_concurrency.lockutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.751 253047 DEBUG oslo_concurrency.lockutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.890 253047 WARNING nova.compute.resource_tracker [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] No compute node record for compute-0.ctlplane.example.com:cb9161e5-191d-495c-920a-01144f42a215: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host cb9161e5-191d-495c-920a-01144f42a215 could not be found.
Jan 20 19:01:54 compute-0 nova_compute[253043]: 2026-01-20 19:01:54.927 253047 INFO nova.compute.resource_tracker [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: cb9161e5-191d-495c-920a-01144f42a215
Jan 20 19:01:54 compute-0 sudo[253802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzhojkhnhfmgnxqpzdzmiuelkzgtahak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935714.04136-3566-48237670934156/AnsiballZ_podman_container.py'
Jan 20 19:01:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:01:54
Jan 20 19:01:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:01:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:01:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', '.nfs', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control']
Jan 20 19:01:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:01:54 compute-0 sudo[253802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:55 compute-0 nova_compute[253043]: 2026-01-20 19:01:55.010 253047 DEBUG nova.compute.resource_tracker [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:01:55 compute-0 nova_compute[253043]: 2026-01-20 19:01:55.011 253047 DEBUG nova.compute.resource_tracker [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:01:55 compute-0 python3.9[253804]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:01:55 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:01:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:55 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:55 compute-0 sudo[253802]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:55.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:55 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:56 compute-0 sudo[253976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aasywjxjdknnloewfdbjrgzfoodvvqre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935715.6311867-3590-121895593884731/AnsiballZ_systemd.py'
Jan 20 19:01:56 compute-0 sudo[253976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:56 compute-0 nova_compute[253043]: 2026-01-20 19:01:56.114 253047 INFO nova.scheduler.client.report [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] [req-97e8e7d6-4505-4a2f-8c9a-8d4975e51da8] Created resource provider record via placement API for resource provider with UUID cb9161e5-191d-495c-920a-01144f42a215 and name compute-0.ctlplane.example.com.
Jan 20 19:01:56 compute-0 nova_compute[253043]: 2026-01-20 19:01:56.141 253047 DEBUG oslo_concurrency.processutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:01:56 compute-0 python3.9[253978]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:01:56 compute-0 systemd[1]: Stopping nova_compute container...
Jan 20 19:01:56 compute-0 nova_compute[253043]: 2026-01-20 19:01:56.436 253047 DEBUG oslo_concurrency.lockutils [None req-d85dd05d-dfa4-46a0-b2fc-1b2142ea86f2 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:01:56 compute-0 nova_compute[253043]: 2026-01-20 19:01:56.437 253047 DEBUG oslo_concurrency.lockutils [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:01:56 compute-0 nova_compute[253043]: 2026-01-20 19:01:56.437 253047 DEBUG oslo_concurrency.lockutils [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:01:56 compute-0 nova_compute[253043]: 2026-01-20 19:01:56.437 253047 DEBUG oslo_concurrency.lockutils [None req-ec639a5a-fc09-46e9-bca6-ee7bfa67bb75 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:01:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/185293698' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:01:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:01:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:01:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:56.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:01:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:56 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:56 compute-0 virtqemud[253535]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 20 19:01:56 compute-0 virtqemud[253535]: hostname: compute-0
Jan 20 19:01:56 compute-0 virtqemud[253535]: End of file while reading data: Input/output error
Jan 20 19:01:56 compute-0 systemd[1]: libpod-77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691.scope: Deactivated successfully.
Jan 20 19:01:56 compute-0 systemd[1]: libpod-77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691.scope: Consumed 3.941s CPU time.
Jan 20 19:01:56 compute-0 podman[254003]: 2026-01-20 19:01:56.852185338 +0000 UTC m=+0.456302515 container died 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:01:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:01:57.107Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:01:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:57 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b4002830 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:57.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691-userdata-shm.mount: Deactivated successfully.
Jan 20 19:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e-merged.mount: Deactivated successfully.
Jan 20 19:01:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:57 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4cc003dc0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:01:57 compute-0 podman[254003]: 2026-01-20 19:01:57.72547442 +0000 UTC m=+1.329591577 container cleanup 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=nova_compute, org.label-schema.license=GPLv2)
Jan 20 19:01:57 compute-0 podman[254003]: nova_compute
Jan 20 19:01:57 compute-0 podman[254032]: nova_compute
Jan 20 19:01:57 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 20 19:01:57 compute-0 systemd[1]: Stopped nova_compute container.
Jan 20 19:01:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:01:57 compute-0 systemd[1]: Starting nova_compute container...
Jan 20 19:01:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba362d5b39081f720856419c7a391d1f63e1c92b17e1dcaaa786beb2a21b84e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 19:01:57 compute-0 podman[254045]: 2026-01-20 19:01:57.91633456 +0000 UTC m=+0.093585431 container init 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:01:57 compute-0 podman[254045]: 2026-01-20 19:01:57.927151354 +0000 UTC m=+0.104402195 container start 77eb8e75ce200b0a9dcc4d021c0d36ee994fffd640610e677c8e55800d2db691 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:01:57 compute-0 nova_compute[254061]: + sudo -E kolla_set_configs
Jan 20 19:01:57 compute-0 podman[254045]: nova_compute
Jan 20 19:01:57 compute-0 systemd[1]: Started nova_compute container.
Jan 20 19:01:57 compute-0 sudo[253976]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Validating config file
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying service configuration files
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /etc/ceph
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Creating directory /etc/ceph
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:57 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Writing out command to execute
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:01:58 compute-0 nova_compute[254061]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:01:58 compute-0 nova_compute[254061]: ++ cat /run_command
Jan 20 19:01:58 compute-0 nova_compute[254061]: + CMD=nova-compute
Jan 20 19:01:58 compute-0 nova_compute[254061]: + ARGS=
Jan 20 19:01:58 compute-0 nova_compute[254061]: + sudo kolla_copy_cacerts
Jan 20 19:01:58 compute-0 nova_compute[254061]: + [[ ! -n '' ]]
Jan 20 19:01:58 compute-0 nova_compute[254061]: + . kolla_extend_start
Jan 20 19:01:58 compute-0 nova_compute[254061]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 19:01:58 compute-0 nova_compute[254061]: Running command: 'nova-compute'
Jan 20 19:01:58 compute-0 nova_compute[254061]: + umask 0022
Jan 20 19:01:58 compute-0 nova_compute[254061]: + exec nova-compute
Jan 20 19:01:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:01:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:01:58.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:01:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[231400]: 20/01/2026 19:01:58 : epoch 696fd089 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb4b80026f0 fd 42 proxy ignored for local
Jan 20 19:01:58 compute-0 kernel: ganesha.nfsd[248718]: segfault at 50 ip 00007fb55e1ac32e sp 00007fb4d2ffc210 error 4 in libntirpc.so.5.8[7fb55e191000+2c000] likely on CPU 0 (core 0, socket 0)
Jan 20 19:01:58 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 19:01:58 compute-0 systemd[1]: Started Process Core Dump (PID 254099/UID 0).
Jan 20 19:01:59 compute-0 ceph-mon[74381]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:59 compute-0 ceph-mon[74381]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:01:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1423622285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:01:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2746094165' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:01:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:01:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:01:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:01:59.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:01:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:01:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:59] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:01:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:01:59] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:01:59 compute-0 nova_compute[254061]: 2026-01-20 19:01:59.950 254065 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:01:59 compute-0 nova_compute[254061]: 2026-01-20 19:01:59.950 254065 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:01:59 compute-0 nova_compute[254061]: 2026-01-20 19:01:59.950 254065 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:01:59 compute-0 nova_compute[254061]: 2026-01-20 19:01:59.950 254065 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 19:02:00 compute-0 systemd-coredump[254100]: Process 231406 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007fb55e1ac32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.097 254065 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:02:00 compute-0 sudo[254230]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxqpmpztvofgprxdzwldqhqotgxdnvwl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935719.8013477-3617-71042597099929/AnsiballZ_podman_container.py'
Jan 20 19:02:00 compute-0 sudo[254230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.119 254065 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.119 254065 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 19:02:00 compute-0 systemd[1]: systemd-coredump@11-254099-0.service: Deactivated successfully.
Jan 20 19:02:00 compute-0 systemd[1]: systemd-coredump@11-254099-0.service: Consumed 1.308s CPU time.
Jan 20 19:02:00 compute-0 podman[254238]: 2026-01-20 19:02:00.240183351 +0000 UTC m=+0.031794244 container died e9ff7cb93c378d4a91bb0fc81458df1345cae54b85feb4cefa344df0eea62bff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f086c1e9211c56265660d5300b2a57482eda3ad0c183e51709af7aed1603004e-merged.mount: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[254238]: 2026-01-20 19:02:00.282414657 +0000 UTC m=+0.074025540 container remove e9ff7cb93c378d4a91bb0fc81458df1345cae54b85feb4cefa344df0eea62bff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:02:00 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 19:02:00 compute-0 ceph-mon[74381]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:02:00 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:02:00 compute-0 ceph-mon[74381]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Jan 20 19:02:00 compute-0 ceph-mon[74381]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:02:00 compute-0 python3.9[254233]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 19:02:00 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:02:00 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.873s CPU time.
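
In the service-failure lines above, status=139 is the exit code of the container's main process. Exit codes above 128 conventionally encode 128 plus a terminating signal number, so 139 corresponds to SIGSEGV (11), consistent with the systemd-coredump service activity logged just before the ceph nfs container died. A quick way to decode such a status (a sketch, not part of any log tooling):

    import signal

    status = 139  # from the systemd line above
    if status > 128:
        sig = signal.Signals(status - 128)
        print(f"exit status {status}: terminated by {sig.name}")  # SIGSEGV
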
Jan 20 19:02:00 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:02:00 compute-0 systemd[1]: Started libpod-conmon-d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719.scope.
Jan 20 19:02:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e47668d51559253f4749e4160b4ebf04bb0fbbbab2b055a34fe436c889b41/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e47668d51559253f4749e4160b4ebf04bb0fbbbab2b055a34fe436c889b41/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e47668d51559253f4749e4160b4ebf04bb0fbbbab2b055a34fe436c889b41/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:00 compute-0 podman[254307]: 2026-01-20 19:02:00.567967827 +0000 UTC m=+0.115647740 container init d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible)
Jan 20 19:02:00 compute-0 podman[254307]: 2026-01-20 19:02:00.575003018 +0000 UTC m=+0.122682931 container start d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:02:00 compute-0 python3.9[254233]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Applying nova statedir ownership
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 20 19:02:00 compute-0 nova_compute_init[254330]: INFO:nova_statedir:Nova statedir ownership complete
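
The INFO lines above trace a single pass of /sbin/nova_statedir_ownership.py over /var/lib/nova: each path is checked against the target ownership 42436:42436, chowned only when it differs, directories get the container_file_t SELinux context, and paths named in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id, per the container's config_data above) are left alone. Below is a minimal sketch reconstructed from those messages, not the actual script shipped in the image; the traversal order, the chcon call, and colon-splitting of the skip list are assumptions:

    import logging
    import os
    import subprocess

    logging.basicConfig(level=logging.INFO, format="%(levelname)s:%(name)s:%(message)s")
    LOG = logging.getLogger("nova_statedir")

    # Values taken from the log lines above; everything else is illustrative.
    STATEDIR = "/var/lib/nova"
    TARGET_UID = TARGET_GID = 42436
    SECONTEXT = "system_u:object_r:container_file_t:s0"
    SKIP = set(filter(None, os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))

    def fix_ownership(path):
        st = os.lstat(path)
        LOG.info("Checking uid: %d gid: %d path: %s", st.st_uid, st.st_gid, path)
        if (st.st_uid, st.st_gid) == (TARGET_UID, TARGET_GID):
            LOG.info("Ownership of %s already %d:%d", path, TARGET_UID, TARGET_GID)
        else:
            LOG.info("Changing ownership of %s from %d:%d to %d:%d",
                     path, st.st_uid, st.st_gid, TARGET_UID, TARGET_GID)
            os.lchown(path, TARGET_UID, TARGET_GID)  # needs root, as in the init container

    def set_secontext(path):
        LOG.info("Setting selinux context of %s to %s", path, SECONTEXT)
        # chcon -h relabels the path itself rather than a symlink target.
        subprocess.run(["chcon", "-h", SECONTEXT, path], check=False)

    def main():
        LOG.info("Applying nova statedir ownership")
        LOG.info("Target ownership for %s: %d:%d", STATEDIR, TARGET_UID, TARGET_GID)
        for dirpath, _dirs, files in os.walk(STATEDIR):
            if dirpath in SKIP:
                continue
            fix_ownership(dirpath)
            set_secontext(dirpath)
            for name in files:
                path = os.path.join(dirpath, name)
                if path not in SKIP:
                    fix_ownership(path)
        LOG.info("Nova statedir ownership complete")

    if __name__ == "__main__":
        main()

Running this once from a short-lived init container, as the nova_compute_init start/died/cleanup events above show, keeps the root-owned chown pass out of the long-running nova_compute process.
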
Jan 20 19:02:00 compute-0 systemd[1]: libpod-d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[254331]: 2026-01-20 19:02:00.662198804 +0000 UTC m=+0.050938193 container died d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:02:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:00.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719-userdata-shm.mount: Deactivated successfully.
Jan 20 19:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-794e47668d51559253f4749e4160b4ebf04bb0fbbbab2b055a34fe436c889b41-merged.mount: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[254341]: 2026-01-20 19:02:00.716277502 +0000 UTC m=+0.069531618 container cleanup d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 19:02:00 compute-0 systemd[1]: libpod-conmon-d85cacba8b51a3e535241cee8147b7ad2f6695a17636e951c43a1b098504f719.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 sudo[254230]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.776 254065 INFO nova.virt.driver [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.878 254065 INFO nova.compute.provider_config [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.888 254065 DEBUG oslo_concurrency.lockutils [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.888 254065 DEBUG oslo_concurrency.lockutils [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.889 254065 DEBUG oslo_concurrency.lockutils [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
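
The Acquiring/Acquired/Releasing triplet above is oslo.concurrency's own DEBUG logging for a named lock taken around service startup. The calling pattern looks like the sketch below; the function name is a hypothetical stand-in for the oslo.service code that guards singleton setup:

    from oslo_concurrency import lockutils

    def guard_singleton_setup():
        # lockutils.lock() returns a context manager. With the default
        # external=False it is a process-local semaphore keyed by name, and
        # entering/exiting it emits the Acquiring/Acquired/Releasing DEBUG
        # lines seen above.
        with lockutils.lock("singleton_lock"):
            pass  # critical section: initialize process-wide singleton state
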
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.889 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.889 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.889 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.889 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.889 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.890 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.891 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.892 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.893 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.893 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.893 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.893 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.893 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.894 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.895 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.895 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.895 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.895 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.895 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.895 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.896 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.897 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.898 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.899 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.900 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.901 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.901 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.901 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.901 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.902 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.903 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.904 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.905 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.905 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.905 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.905 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.905 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.905 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.906 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.906 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.906 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.906 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.906 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.906 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.907 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.907 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.907 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.907 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.907 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.907 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.908 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.909 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.909 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.909 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.909 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.909 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.910 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.910 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.910 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.910 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.910 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.911 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.911 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.911 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.911 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.911 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.912 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.912 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.912 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.912 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.912 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.912 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.913 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.914 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.914 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.914 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.914 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.914 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.915 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.916 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.916 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.916 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.916 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.916 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.917 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.917 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.917 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.917 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.918 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.919 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.919 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.919 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.919 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.919 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.919 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.920 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.920 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.920 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.920 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.920 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.921 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.921 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.921 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.921 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.921 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.922 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.922 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.922 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.922 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.922 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.922 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.923 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.924 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.925 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.926 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.927 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.928 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.929 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.930 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.931 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.932 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.933 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.933 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.933 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.933 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.933 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.933 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.934 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.935 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.936 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.937 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.937 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.937 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.937 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.937 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.937 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.938 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.938 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.938 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.938 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.938 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.939 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.939 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.939 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.939 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.940 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.940 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.940 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.940 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.940 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.941 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.941 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.941 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.941 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.941 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.942 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.942 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.942 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.942 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.942 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.943 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.943 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.943 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.943 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.943 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.944 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.944 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.944 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.944 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.945 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.945 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.945 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.945 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.945 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.946 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.946 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.946 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.946 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.946 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.947 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.947 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.947 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.948 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.948 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.948 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.948 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.948 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.949 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.949 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.949 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.949 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.950 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.950 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.950 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.950 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.950 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.951 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.951 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.951 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.951 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.951 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.952 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.952 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.952 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.952 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.953 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.953 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.953 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.953 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.953 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.954 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.954 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.954 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.954 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.954 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.955 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.955 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.955 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.955 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.955 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.956 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.956 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.956 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.956 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.956 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.957 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.957 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.957 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.957 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.957 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.958 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.958 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.958 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.958 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.958 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.959 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.959 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.959 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.959 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.960 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.960 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.960 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.960 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.960 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.961 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.961 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.961 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.961 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.961 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.962 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.962 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.962 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.962 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.963 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.963 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.963 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.963 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.963 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.964 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.964 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.964 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.964 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.964 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.965 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.965 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.965 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.965 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.966 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.966 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.966 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.966 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.966 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.967 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.967 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.967 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.967 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.968 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.968 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.968 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.968 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.968 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.969 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.969 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.969 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.969 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.970 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.970 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.970 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.970 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.970 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.970 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.971 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.972 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.973 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.974 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.974 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.974 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.974 254065 WARNING oslo_config.cfg [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 19:02:00 compute-0 nova_compute[254061]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 19:02:00 compute-0 nova_compute[254061]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 19:02:00 compute-0 nova_compute[254061]: and ``live_migration_inbound_addr`` respectively.
Jan 20 19:02:00 compute-0 nova_compute[254061]: ).  Its value may be silently ignored in the future.
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.974 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.975 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.976 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.976 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.976 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.976 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.976 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.976 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rbd_secret_uuid        = aecbbf3b-b405-507b-97d7-637a83f5b4b1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.977 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.978 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.979 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.980 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.980 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.980 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.980 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.980 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.980 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.981 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.982 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.983 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.984 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.985 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.986 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.987 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.987 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.987 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.987 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.987 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.987 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.988 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.989 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.990 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.991 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.992 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.993 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.994 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.994 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.994 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.994 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.994 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.994 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.995 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.996 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.996 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.996 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.996 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.996 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.996 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.997 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.997 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.997 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.997 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.997 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.997 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.998 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:00 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:00.999 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.000 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.000 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.000 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.000 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.000 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.000 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.001 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.002 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.003 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.004 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.005 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.006 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.007 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.008 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.009 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.010 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.010 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.010 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.010 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.010 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.010 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.011 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.012 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.013 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.014 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.015 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.016 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.016 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.016 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.016 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.016 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.016 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.017 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.017 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.017 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.017 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.017 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.017 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.018 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.019 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.019 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.019 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.019 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.019 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.019 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.020 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.021 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.022 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.023 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.024 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.025 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.026 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.027 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.028 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.028 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.028 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.028 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.028 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.028 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.029 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.029 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.029 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.029 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.029 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.029 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.030 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.030 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.030 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.030 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.030 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.030 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.031 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.031 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.031 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.031 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.031 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.031 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.032 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.033 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.034 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.035 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.035 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.035 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.035 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.035 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.035 254065 DEBUG oslo_service.service [None req-6a5e4d00-7237-49c2-bfce-1a3cde3ab03d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.036 254065 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.056 254065 INFO nova.virt.node [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Determined node identity cb9161e5-191d-495c-920a-01144f42a215 from /var/lib/nova/compute_id
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.057 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.057 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.058 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.058 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.071 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f2c8c3b1d30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.073 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f2c8c3b1d30> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.073 254065 INFO nova.virt.libvirt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Connection event '1' reason 'None'
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.080 254065 INFO nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]: 
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <host>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <uuid>19a62fa8-72e0-4d98-a48b-b9301ceb89c2</uuid>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <arch>x86_64</arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model>EPYC-Rome-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <vendor>AMD</vendor>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <microcode version='16777317'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <signature family='23' model='49' stepping='0'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='x2apic'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='tsc-deadline'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='osxsave'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='hypervisor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='tsc_adjust'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='spec-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='stibp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='arch-capabilities'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='cmp_legacy'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='topoext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='virt-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='lbrv'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='tsc-scale'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='vmcb-clean'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='pause-filter'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='pfthreshold'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='svme-addr-chk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='rdctl-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='mds-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature name='pschange-mc-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <pages unit='KiB' size='4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <pages unit='KiB' size='2048'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <pages unit='KiB' size='1048576'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <power_management>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <suspend_mem/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </power_management>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <iommu support='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <migration_features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <live/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <uri_transports>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <uri_transport>tcp</uri_transport>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <uri_transport>rdma</uri_transport>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </uri_transports>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </migration_features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <topology>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <cells num='1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <cell id='0'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           <memory unit='KiB'>7864316</memory>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           <pages unit='KiB' size='4'>1966079</pages>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           <distances>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <sibling id='0' value='10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           </distances>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           <cpus num='8'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:           </cpus>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         </cell>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </cells>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </topology>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <cache>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </cache>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <secmodel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model>selinux</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <doi>0</doi>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </secmodel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <secmodel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model>dac</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <doi>0</doi>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </secmodel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </host>
Jan 20 19:02:01 compute-0 nova_compute[254061]: 
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <guest>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <os_type>hvm</os_type>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <arch name='i686'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <wordsize>32</wordsize>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <domain type='qemu'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <domain type='kvm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <pae/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <nonpae/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <acpi default='on' toggle='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <apic default='on' toggle='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <cpuselection/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <deviceboot/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <disksnapshot default='on' toggle='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <externalSnapshot/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </guest>
Jan 20 19:02:01 compute-0 nova_compute[254061]: 
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <guest>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <os_type>hvm</os_type>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <arch name='x86_64'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <wordsize>64</wordsize>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <domain type='qemu'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <domain type='kvm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <acpi default='on' toggle='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <apic default='on' toggle='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <cpuselection/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <deviceboot/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <disksnapshot default='on' toggle='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <externalSnapshot/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </guest>
Jan 20 19:02:01 compute-0 nova_compute[254061]: 
Jan 20 19:02:01 compute-0 nova_compute[254061]: </capabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]: 
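(Aside: the <capabilities> document dumped above, and the per-machine-type <domainCapabilities> document that nova_compute logs next, both come from libvirt's standard query APIs. The sketch below is a minimal, hypothetical illustration using the libvirt Python bindings — it is not nova's own code; the connection URI and the argument values are assumptions read off the surrounding log lines.)

    import libvirt

    # Minimal sketch (assumptions from the log, not nova internals):
    # fetch the host <capabilities> XML and one <domainCapabilities> XML.
    conn = libvirt.open('qemu:///system')      # local QEMU/KVM hypervisor
    caps_xml = conn.getCapabilities()          # the <capabilities> dump above
    domcaps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',               # <emulator> from the guest blocks
        'i686',                                # arch, as in "for arch=i686"
        'q35',                                 # alias of pc-q35-rhel9.8.0 above
        'kvm',                                 # <domain type='kvm'/>
        0)                                     # no flags
    print(domcaps_xml)
    conn.close()

(The equivalent one-liners with the virsh CLI would be "virsh capabilities" and "virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine q35 --virttype kvm".)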
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.087 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.089 254065 DEBUG nova.virt.libvirt.volume.mount [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.091 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 19:02:01 compute-0 nova_compute[254061]: <domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <domain>kvm</domain>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <arch>i686</arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <vcpu max='4096'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <iothreads supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <os supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='firmware'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <loader supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>rom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pflash</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='readonly'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>yes</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='secure'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </loader>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </os>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='maximum' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='maximumMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-model' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <vendor>AMD</vendor>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='x2apic'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='stibp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='succor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lbrv'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='custom' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Dhyana-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </cpu>
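
The <cpu> block that closes above is libvirt's domain-capabilities report for this emulator: each <model> carries usable='yes' or usable='no', and every unusable model is followed by a <blockers> element naming the CPU features the host cannot supply. In this section, only the Westmere variants report usable='yes' among the named Intel models, while the Skylake, Snowridge, SapphireRapids and SierraForest families are all blocked on host-missing features such as erms, pcid and invpcid. A minimal sketch of how the same report can be fetched and summarised, assuming the libvirt-python bindings, a local qemu:///system connection, and illustrative x86_64/q35/kvm parameters (none of which are taken from this log):

    import libvirt
    import xml.etree.ElementTree as ET

    # Fetch the same domain-capabilities XML that nova_compute logs above.
    conn = libvirt.open('qemu:///system')
    xml_desc = conn.getDomainCapabilities(None, 'x86_64', 'q35', 'kvm', 0)
    root = ET.fromstring(xml_desc)

    # Under <mode name='custom'>, usable='no' models carry a <blockers> list
    # of the features the host lacks; usable='yes' models can be requested as-is.
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        name, usable = model.text, model.get('usable')
        blk = root.find(f".//cpu/mode[@name='custom']/blockers[@model='{name}']")
        missing = [f.get('name') for f in blk.findall('feature')] if blk is not None else []
        print(f"{name}: usable={usable}", f"missing={','.join(missing)}" if missing else '')

    conn.close()
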
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <memoryBacking supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='sourceType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>anonymous</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>memfd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </memoryBacking>
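
The <memoryBacking> element just above advertises which guest-memory source types this host can back: file, anonymous and memfd. Continuing the sketch above (same parsed root), a hedged example of selecting a source the host actually offers; preferring memfd here is purely an illustrative policy, not something stated in this log:

    # Continuation of the sketch above; 'root' is the parsed capabilities XML.
    offered = [v.text for v in root.findall(".//memoryBacking/enum[@name='sourceType']/value")]
    source = 'memfd' if 'memfd' in offered else 'anonymous'   # illustrative preference
    print('memory backing source:', source, 'offered:', offered)
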
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <disk supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='diskDevice'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>disk</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cdrom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>floppy</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>lun</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>fdc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>sata</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <graphics supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vnc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egl-headless</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <video supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='modelType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vga</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cirrus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>none</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>bochs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ramfb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </video>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hostdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='mode'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>subsystem</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='startupPolicy'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>mandatory</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>requisite</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>optional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='subsysType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pci</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='capsType'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='pciBackend'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hostdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <rng supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>random</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <filesystem supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='driverType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>path</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>handle</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtiofs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </filesystem>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tpm supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-tis</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-crb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emulator</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>external</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendVersion'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>2.0</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </tpm>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <redirdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </redirdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <channel supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </channel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <crypto supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </crypto>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <interface supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>passt</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <panic supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>isa</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>hyperv</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </panic>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <console supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>null</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dev</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pipe</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stdio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>udp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tcp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu-vdagent</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </console>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <gic supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <vmcoreinfo supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <genid supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backingStoreInput supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backup supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <async-teardown supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <s390-pv supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <ps2 supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tdx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sev supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sgx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hyperv supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='features'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>relaxed</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vapic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>spinlocks</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vpindex</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>runtime</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>synic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stimer</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reset</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vendor_id</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>frequencies</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reenlightenment</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tlbflush</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ipi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>avic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emsr_bitmap</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>xmm_input</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <spinlocks>4095</spinlocks>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <stimer_direct>on</stimer_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hyperv>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <launchSecurity supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </features>
Jan 20 19:02:01 compute-0 nova_compute[254061]: </domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.098 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 19:02:01 compute-0 nova_compute[254061]: <domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <domain>kvm</domain>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <arch>i686</arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <vcpu max='240'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <iothreads supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <os supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='firmware'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <loader supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>rom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pflash</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='readonly'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>yes</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='secure'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </loader>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </os>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='maximum' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='maximumMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-model' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <vendor>AMD</vendor>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='x2apic'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='stibp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='succor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lbrv'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='custom' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Dhyana-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 rsyslogd[1003]: imjournal from <np0005589270:nova_compute>: begin to drop messages due to rate-limiting
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <memoryBacking supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='sourceType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>anonymous</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>memfd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </memoryBacking>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <disk supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='diskDevice'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>disk</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cdrom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>floppy</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>lun</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ide</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>fdc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>sata</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <graphics supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vnc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egl-headless</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <video supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='modelType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vga</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cirrus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>none</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>bochs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ramfb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </video>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hostdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='mode'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>subsystem</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='startupPolicy'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>mandatory</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>requisite</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>optional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='subsysType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pci</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='capsType'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='pciBackend'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hostdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <rng supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>random</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <filesystem supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='driverType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>path</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>handle</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtiofs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </filesystem>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tpm supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-tis</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-crb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emulator</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>external</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendVersion'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>2.0</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </tpm>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <redirdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </redirdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <channel supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </channel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <crypto supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </crypto>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <interface supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>passt</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <panic supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>isa</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>hyperv</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </panic>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <console supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>null</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dev</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pipe</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stdio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>udp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tcp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu-vdagent</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </console>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <gic supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <vmcoreinfo supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <genid supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backingStoreInput supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backup supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <async-teardown supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <s390-pv supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <ps2 supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tdx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sev supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sgx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hyperv supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='features'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>relaxed</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vapic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>spinlocks</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vpindex</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>runtime</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>synic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stimer</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reset</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vendor_id</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>frequencies</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reenlightenment</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tlbflush</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ipi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>avic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emsr_bitmap</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>xmm_input</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <spinlocks>4095</spinlocks>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <stimer_direct>on</stimer_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hyperv>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <launchSecurity supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </features>
Jan 20 19:02:01 compute-0 nova_compute[254061]: </domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.162 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
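The two log entries above show nova-compute querying libvirt once per (arch, machine type) pair and logging the returned domainCapabilities document verbatim. As a minimal sketch only (not Nova's actual code path), the same XML can be fetched and inspected with the libvirt Python bindings; the emulator path, arch, machine type, and virt type below mirror the values visible in this log, while the qemu:///system connection URI is an assumption:

    import libvirt                      # libvirt-python bindings
    import xml.etree.ElementTree as ET

    # Connect to the local libvirt daemon (URI assumed for this sketch).
    conn = libvirt.open("qemu:///system")

    # Fetch domain capabilities for the emulator/arch/machine/virttype
    # combination that appears in the dump below.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary (the <path> element)
        "x86_64",                 # architecture
        "q35",                    # machine type alias
        "kvm",                    # virt type (the <domain> element)
        0)                        # flags, currently unused

    # List custom-mode CPU models; for models marked usable='no', print the
    # <blockers> features that make them unusable on this host.
    root = ET.fromstring(caps_xml)
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.text, "usable:", model.get("usable"))
    for blk in root.findall("./cpu/mode[@name='custom']/blockers"):
        feats = [f.get("name") for f in blk.findall("feature")]
        print(blk.get("model"), "blocked by:", ", ".join(feats))

    conn.close()

The same document is also available from the shell via virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm, which is how the dump below can be reproduced for inspection.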
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.167 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 19:02:01 compute-0 nova_compute[254061]: <domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <domain>kvm</domain>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <arch>x86_64</arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <vcpu max='4096'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <iothreads supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <os supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='firmware'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>efi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <loader supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>rom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pflash</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='readonly'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>yes</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='secure'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>yes</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </loader>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </os>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='maximum' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='maximumMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-model' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <vendor>AMD</vendor>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='x2apic'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='stibp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='succor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lbrv'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='custom' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Dhyana-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <memoryBacking supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='sourceType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>anonymous</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>memfd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </memoryBacking>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <disk supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='diskDevice'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>disk</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cdrom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>floppy</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>lun</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>fdc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>sata</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <graphics supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vnc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egl-headless</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <video supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='modelType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vga</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cirrus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>none</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>bochs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ramfb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </video>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hostdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='mode'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>subsystem</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='startupPolicy'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>mandatory</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>requisite</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>optional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='subsysType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pci</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='capsType'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='pciBackend'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hostdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <rng supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>random</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <filesystem supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='driverType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>path</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>handle</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtiofs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </filesystem>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tpm supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-tis</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-crb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emulator</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>external</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendVersion'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>2.0</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </tpm>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <redirdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </redirdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <channel supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </channel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <crypto supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </crypto>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <interface supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>passt</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <panic supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>isa</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>hyperv</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </panic>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <console supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>null</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dev</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pipe</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stdio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>udp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tcp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu-vdagent</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </console>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <gic supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <vmcoreinfo supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <genid supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backingStoreInput supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backup supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <async-teardown supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <s390-pv supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <ps2 supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tdx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sev supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sgx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hyperv supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='features'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>relaxed</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vapic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>spinlocks</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vpindex</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>runtime</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>synic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stimer</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reset</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vendor_id</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>frequencies</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reenlightenment</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tlbflush</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ipi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>avic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emsr_bitmap</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>xmm_input</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <spinlocks>4095</spinlocks>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <stimer_direct>on</stimer_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hyperv>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <launchSecurity supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </features>
Jan 20 19:02:01 compute-0 nova_compute[254061]: </domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
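(The DEBUG entry above is nova's _get_domain_capabilities helper logging the XML returned by libvirt's domain-capabilities API. As a minimal sketch, not part of the captured log, the same XML can be fetched directly with the python-libvirt bindings, assuming a local qemu:///system connection and the emulator path reported in <path>; the arguments mirror the arch/machine_type values in the debug messages:)

    # Hypothetical sketch: query the same domainCapabilities XML that
    # nova-compute logs above, via python-libvirt (libvirt-python package).
    import libvirt

    conn = libvirt.open("qemu:///system")   # local libvirt daemon, as on compute-0
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",            # emulator binary, per <path> in the dump
        "x86_64",                           # arch, per the debug message
        "pc",                               # machine type, per the debug message
        "kvm",                              # virt type, per <domain>kvm</domain>
        0,                                  # flags (none)
    )
    print(xml)                              # same <domainCapabilities> document
    conn.close()

(The equivalent one-off CLI query is: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine pc --virttype kvm)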
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.249 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 19:02:01 compute-0 nova_compute[254061]: <domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <domain>kvm</domain>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <arch>x86_64</arch>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <vcpu max='240'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <iothreads supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <os supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='firmware'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <loader supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>rom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pflash</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='readonly'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>yes</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='secure'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>no</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </loader>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </os>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='maximum' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='maximumMigratable'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>on</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>off</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='host-model' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <vendor>AMD</vendor>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='x2apic'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='stibp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='succor'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lbrv'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <mode name='custom' supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Broadwell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ddpd-u'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sha512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm3'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sm4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Cooperlake-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Denverton-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Dhyana-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amd-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='auto-ibrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibpb-brtype'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='no-nested-data-bp'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='null-sel-clr-base'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='perfmon-v2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbpb'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='stibp-always-on'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='EPYC-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-128'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-256'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx10-512'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='prefetchiti'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Haswell-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='IvyBridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='KnightsMill-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4fmaps'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-4vnniw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512er'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512pf'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fma4'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tbm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xop'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:01.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='amx-tile'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-bf16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-fp16'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bitalg'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vbmi2'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrc'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fzrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='la57'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='taa-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='tsx-ldtrk'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='SierraForest-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ifma'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-ne-convert'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx-vnni-int8'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bhi-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='bus-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cmpccxadd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fbsdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='fsrs'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ibrs-all'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='intel-psfd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ipred-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='lam'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mcdt-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pbrsb-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='psdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rrsba-ctrl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='serialize'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vaes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='vpclmulqdq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='hle'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='rtm'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512bw'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512cd'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512dq'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512f'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='avx512vl'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='invpcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pcid'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='pku'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='mpx'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v2'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v3'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='core-capability'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='split-lock-detect'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='Snowridge-v4'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='cldemote'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='erms'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='gfni'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdir64b'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='movdiri'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='xsaves'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='athlon-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='core2duo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='coreduo-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='n270-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='ss'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <blockers model='phenom-v1'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnow'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <feature name='3dnowext'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </blockers>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </mode>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <memoryBacking supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <enum name='sourceType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>anonymous</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <value>memfd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </memoryBacking>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <disk supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='diskDevice'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>disk</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cdrom</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>floppy</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>lun</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ide</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>fdc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>sata</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <graphics supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vnc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egl-headless</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <video supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='modelType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vga</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>cirrus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>none</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>bochs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ramfb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </video>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hostdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='mode'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>subsystem</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='startupPolicy'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>mandatory</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>requisite</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>optional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='subsysType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pci</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>scsi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='capsType'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='pciBackend'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hostdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <rng supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtio-non-transitional</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>random</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>egd</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <filesystem supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='driverType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>path</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>handle</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>virtiofs</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </filesystem>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tpm supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-tis</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tpm-crb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emulator</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>external</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendVersion'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>2.0</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </tpm>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <redirdev supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='bus'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>usb</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </redirdev>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <channel supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </channel>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <crypto supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendModel'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>builtin</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </crypto>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <interface supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='backendType'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>default</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>passt</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <panic supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='model'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>isa</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>hyperv</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </panic>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <console supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='type'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>null</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vc</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pty</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dev</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>file</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>pipe</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stdio</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>udp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tcp</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>unix</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>qemu-vdagent</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>dbus</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </console>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   <features>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <gic supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <vmcoreinfo supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <genid supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backingStoreInput supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <backup supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <async-teardown supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <s390-pv supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <ps2 supported='yes'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <tdx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sev supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <sgx supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <hyperv supported='yes'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <enum name='features'>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>relaxed</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vapic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>spinlocks</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vpindex</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>runtime</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>synic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>stimer</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reset</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>vendor_id</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>frequencies</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>reenlightenment</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>tlbflush</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>ipi</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>avic</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>emsr_bitmap</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <value>xmm_input</value>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </enum>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       <defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <spinlocks>4095</spinlocks>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <stimer_direct>on</stimer_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:02:01 compute-0 nova_compute[254061]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:02:01 compute-0 nova_compute[254061]:       </defaults>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     </hyperv>
Jan 20 19:02:01 compute-0 nova_compute[254061]:     <launchSecurity supported='no'/>
Jan 20 19:02:01 compute-0 nova_compute[254061]:   </features>
Jan 20 19:02:01 compute-0 nova_compute[254061]: </domainCapabilities>
Jan 20 19:02:01 compute-0 nova_compute[254061]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.327 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.327 254065 INFO nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Secure Boot support detected
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.329 254065 INFO nova.virt.libvirt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.337 254065 DEBUG nova.virt.libvirt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.357 254065 INFO nova.virt.node [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Determined node identity cb9161e5-191d-495c-920a-01144f42a215 from /var/lib/nova/compute_id
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.383 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Verified node cb9161e5-191d-495c-920a-01144f42a215 matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.403 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 19:02:01 compute-0 sshd-session[228096]: Connection closed by 192.168.122.30 port 35134
Jan 20 19:02:01 compute-0 sshd-session[228093]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:02:01 compute-0 systemd[1]: session-55.scope: Deactivated successfully.
Jan 20 19:02:01 compute-0 systemd[1]: session-55.scope: Consumed 2min 4.203s CPU time.
Jan 20 19:02:01 compute-0 systemd-logind[796]: Session 55 logged out. Waiting for processes to exit.
Jan 20 19:02:01 compute-0 systemd-logind[796]: Removed session 55.
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.534 254065 DEBUG oslo_concurrency.lockutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.535 254065 DEBUG oslo_concurrency.lockutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.535 254065 DEBUG oslo_concurrency.lockutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.535 254065 DEBUG nova.compute.resource_tracker [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:02:01 compute-0 nova_compute[254061]: 2026-01-20 19:02:01.536 254065 DEBUG oslo_concurrency.processutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:02:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:02:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:02:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309350642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.079 254065 DEBUG oslo_concurrency.processutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:02:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1309350642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.274 254065 WARNING nova.virt.libvirt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.276 254065 DEBUG nova.compute.resource_tracker [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4814MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.276 254065 DEBUG oslo_concurrency.lockutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.277 254065 DEBUG oslo_concurrency.lockutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.443 254065 DEBUG nova.compute.resource_tracker [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.444 254065 DEBUG nova.compute.resource_tracker [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.465 254065 DEBUG nova.scheduler.client.report [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Refreshing inventories for resource provider cb9161e5-191d-495c-920a-01144f42a215 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.522 254065 DEBUG nova.scheduler.client.report [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Updating ProviderTree inventory for provider cb9161e5-191d-495c-920a-01144f42a215 from _refresh_and_get_inventory using data: {} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.522 254065 DEBUG nova.compute.provider_tree [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.548 254065 DEBUG nova.scheduler.client.report [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Refreshing aggregate associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.572 254065 DEBUG nova.scheduler.client.report [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Refreshing trait associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, traits: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:02:02 compute-0 nova_compute[254061]: 2026-01-20 19:02:02.599 254065 DEBUG oslo_concurrency.processutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:02:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:02.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:02:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4120022383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.115 254065 DEBUG oslo_concurrency.processutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.119 254065 DEBUG nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 20 19:02:03 compute-0 nova_compute[254061]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.119 254065 INFO nova.virt.libvirt.host [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] kernel doesn't support AMD SEV
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.120 254065 DEBUG nova.compute.provider_tree [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.121 254065 DEBUG nova.virt.libvirt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.190 254065 DEBUG nova.scheduler.client.report [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Updated inventory for provider cb9161e5-191d-495c-920a-01144f42a215 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.190 254065 DEBUG nova.compute.provider_tree [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Updating resource provider cb9161e5-191d-495c-920a-01144f42a215 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.190 254065 DEBUG nova.compute.provider_tree [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:02:03 compute-0 ceph-mon[74381]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:02:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/710667332' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4120022383' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.288 254065 DEBUG nova.compute.provider_tree [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Updating resource provider cb9161e5-191d-495c-920a-01144f42a215 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.326 254065 DEBUG nova.compute.resource_tracker [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.326 254065 DEBUG oslo_concurrency.lockutils [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.049s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.326 254065 DEBUG nova.service [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 20 19:02:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:03.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.440 254065 DEBUG nova.service [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 20 19:02:03 compute-0 nova_compute[254061]: 2026-01-20 19:02:03.440 254065 DEBUG nova.servicegroup.drivers.db [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 20 19:02:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:02:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3165804201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1435896566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:02:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:04.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:02:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190204 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:02:05 compute-0 ceph-mon[74381]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:02:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2728200517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:02:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:05.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:02:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:06.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:02:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:07.108Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:02:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:07.109Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:02:07 compute-0 podman[254466]: 2026-01-20 19:02:07.109658993 +0000 UTC m=+0.079482488 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 19:02:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:07.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:07 compute-0 ceph-mon[74381]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:02:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:02:08 compute-0 ceph-mon[74381]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:02:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:09.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:02:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:09] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:02:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:09] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Jan 20 19:02:10 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 12.
Jan 20 19:02:10 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:02:10 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.873s CPU time.
Jan 20 19:02:10 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 19:02:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:10.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:10 compute-0 podman[254542]: 2026-01-20 19:02:10.773660157 +0000 UTC m=+0.053634077 container create c295f9130b000349283ee1d76cc63270dc10a72561b2da89947bc3a0790f6477 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 19:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cc76d181d8b14a607c2f0b6c5c1c15018584fa4feefedcd2183d36a91c8131/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cc76d181d8b14a607c2f0b6c5c1c15018584fa4feefedcd2183d36a91c8131/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cc76d181d8b14a607c2f0b6c5c1c15018584fa4feefedcd2183d36a91c8131/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3cc76d181d8b14a607c2f0b6c5c1c15018584fa4feefedcd2183d36a91c8131/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:10 compute-0 podman[254542]: 2026-01-20 19:02:10.841844998 +0000 UTC m=+0.121818898 container init c295f9130b000349283ee1d76cc63270dc10a72561b2da89947bc3a0790f6477 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 19:02:10 compute-0 podman[254542]: 2026-01-20 19:02:10.748289958 +0000 UTC m=+0.028263908 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:10 compute-0 podman[254542]: 2026-01-20 19:02:10.847010167 +0000 UTC m=+0.126984067 container start c295f9130b000349283ee1d76cc63270dc10a72561b2da89947bc3a0790f6477 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 19:02:10 compute-0 bash[254542]: c295f9130b000349283ee1d76cc63270dc10a72561b2da89947bc3a0790f6477
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 19:02:10 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 19:02:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 19:02:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:11 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:02:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:11.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:11 compute-0 ceph-mon[74381]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:02:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:02:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:02:12 compute-0 ceph-mon[74381]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:02:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:12.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:13.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:14 compute-0 sudo[254604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:02:14 compute-0 sudo[254604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:14 compute-0 sudo[254604]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:14.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:14 compute-0 podman[254628]: 2026-01-20 19:02:14.776064064 +0000 UTC m=+0.082471549 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:14 compute-0 ceph-mon[74381]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:15.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:16.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:17.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:02:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:17.110Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:02:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:17.110Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:02:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:17.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:17 compute-0 ceph-mon[74381]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:02:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:17 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:02:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:17 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:02:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:18.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:19.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:19 compute-0 ceph-mon[74381]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:02:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 20 19:02:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:19] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 19:02:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:19] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Jan 20 19:02:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:20.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:21 compute-0 ceph-mon[74381]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Jan 20 19:02:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:21.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:02:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:02:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:22.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:02:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:23.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:23 compute-0 ceph-mon[74381]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 19:02:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Jan 20 19:02:23 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Jan 20 19:02:24 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190224 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:02:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:24.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
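The anonymous "HEAD / HTTP/1.0" requests arriving from 192.168.122.100 and .102 every two seconds look like load-balancer health probes against radosgw, which is why they return 200 with near-zero latency and no user. The same probe by hand, as a sketch; the port (8080) is an assumption, since the log does not record the beast listener port:

    import http.client

    # Reproduce the health probe: HEAD / against the radosgw beast frontend.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200 means the gateway is answering
    conn.close()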
Jan 20 19:02:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c94000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:24 compute-0 ceph-mon[74381]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:02:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:25 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:25.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:25 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
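These recurring "proxy header rest len failed" events suggest the ganesha listener is configured for the HAProxy PROXY protocol (consistent with the haproxy-nfs-cephfs service logging below), so a bare TCP connect, such as a Layer4 health check, fails the header parse and the transport is marked dead; the NFS service itself is unaffected. A sketch of a well-formed PROXY v1 preamble, with the addresses and port 2049 as illustrative assumptions:

    import socket

    # PROXY v1 header: "PROXY TCP4 <src> <dst> <sport> <dport>\r\n"
    preamble = b"PROXY TCP4 192.168.122.100 192.168.122.100 40000 2049\r\n"
    with socket.create_connection(("compute-0", 2049), timeout=5) as s:
        s.sendall(preamble)  # parser is satisfied; RPC exchange would follow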
Jan 20 19:02:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 20 19:02:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:02:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:02:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:26.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:02:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190226 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:02:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:26 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:27.111Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:02:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:27 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:27.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:02:27 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/545847550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:02:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:02:27 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/545847550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
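The client.openstack dispatches above are periodic capacity polling, "df" plus a per-pool "osd pool get-quota", which is the pattern a Cinder RBD driver produces when refreshing pool stats. The same pair via the python rados binding, as a sketch; the conffile path is an assumption, the client name is taken from the log:

    import json
    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.openstack")
    cluster.connect()
    # mon_command takes the same JSON the monitor logs in its audit channel.
    ret, out, errs = cluster.mon_command(
        json.dumps({"prefix": "df", "format": "json"}), b"")
    print(json.loads(out)["stats"]["total_avail_bytes"])
    cluster.shutdown()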
Jan 20 19:02:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 852 B/s wr, 43 op/s
Jan 20 19:02:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:28 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c78000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:02:28 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785923387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:02:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:02:28 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/785923387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:02:28 compute-0 ceph-mon[74381]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 852 B/s wr, 2 op/s
Jan 20 19:02:28 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/545847550' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:02:28 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/545847550' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:02:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:28.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:28 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:29 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c78000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:29.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:29 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:29] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:02:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:29] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:02:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 341 B/s wr, 42 op/s
Jan 20 19:02:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/668216129' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:02:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/668216129' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:02:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/785923387' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:02:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/785923387' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:02:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:02:30.277 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:02:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:02:30.277 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:02:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:02:30.277 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
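The acquire/release pair above (held 0.000s) is oslo.concurrency's synchronized() wrapper around ProcessMonitor._check_child_processes doing its periodic pass. A minimal sketch of that locking pattern:

    from oslo_concurrency import lockutils

    # Same internal-lock pattern the agent logs: acquire, run, release.
    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        pass  # inspect monitored children and respawn any that exited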
Jan 20 19:02:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:30.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:30 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:31 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:31.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:31 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c78001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:31 compute-0 rsyslogd[1003]: imjournal: 7258 messages lost due to rate-limiting (20000 allowed within 600 seconds)
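rsyslog's imjournal hit its rate limit here, 20000 messages per 600 s, plausibly from the repeating ganesha and radosgw lines above; the burst can be raised with the imjournal module parameters ratelimit.interval and ratelimit.burst. The drop affects only rsyslog's copy; journald still holds the window, so it can be pulled back out, as in this sketch:

    import subprocess

    # Recover the rate-limited 10-minute window straight from journald.
    subprocess.run(["journalctl", "--since", "2026-01-20 18:52:31",
                    "--until", "2026-01-20 19:02:31", "--no-pager"])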
Jan 20 19:02:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 341 B/s wr, 49 op/s
Jan 20 19:02:32 compute-0 ceph-mon[74381]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 852 B/s wr, 43 op/s
Jan 20 19:02:32 compute-0 ceph-mon[74381]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 341 B/s wr, 42 op/s
Jan 20 19:02:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:32.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:32 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:33 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:33.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:33 compute-0 ceph-mon[74381]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 341 B/s wr, 49 op/s
Jan 20 19:02:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:33 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 54 op/s
Jan 20 19:02:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:34.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:34 compute-0 sudo[254692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:02:34 compute-0 sudo[254692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:34 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c78001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:34 compute-0 sudo[254692]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:35 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:35.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:35 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 19:02:36 compute-0 ceph-mon[74381]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 341 B/s wr, 54 op/s
Jan 20 19:02:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:36 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:02:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:36.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:36 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:37 compute-0 ceph-mon[74381]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 154 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 20 19:02:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:37.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:02:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:37.112Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:02:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:37.112Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
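Alertmanager keeps failing to deliver dashboard webhooks because TCP 8443 on compute-1 and compute-2 times out, so each dispatch is retried and then cancelled at the context deadline. A reachability sketch against the same receiver URL from the log (the empty JSON body is an assumption, just to exercise the POST path):

    import urllib.request

    req = urllib.request.Request(
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
        data=b"{}", headers={"Content-Type": "application/json"},
        method="POST")
    try:
        print(urllib.request.urlopen(req, timeout=5).status)
    except OSError as exc:
        print("unreachable:", exc)  # matches the i/o timeout seen above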
Jan 20 19:02:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:37 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c78001cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:37.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:37 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 511 B/s wr, 151 op/s
Jan 20 19:02:38 compute-0 podman[254720]: 2026-01-20 19:02:38.074942625 +0000 UTC m=+0.049327401 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:02:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:38.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:38 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:38 compute-0 sudo[254740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:38 compute-0 sudo[254740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:38 compute-0 sudo[254740]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:38 compute-0 sudo[254765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:02:38 compute-0 sudo[254765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:39 compute-0 ceph-mon[74381]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 511 B/s wr, 151 op/s
Jan 20 19:02:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:39 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:39 compute-0 sudo[254765]: pam_unix(sudo:session): session closed for user root
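The sudo sequence above is the mgr's cephadm module running its copied binary over ssh: a `which python3` probe, then `gather-facts` under a 15-minute timeout. The same facts payload can be produced manually, as a sketch; the path is copied from the log, and the output being one JSON document with a "hostname" key is an assumption:

    import json
    import subprocess

    out = subprocess.run(
        ["sudo", "python3",
         "/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
         "--timeout", "895", "gather-facts"],
        capture_output=True, text=True)
    print(json.loads(out.stdout)["hostname"])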
Jan 20 19:02:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:39.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:02:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:39 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780030a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:39 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:02:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:39 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
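With reclaim complete(0) and clid count(0) there is no client state to recover, so the 90-second grace window entered above can lift early. While waiting, deployment-level state of the "cephfs" NFS cluster can be checked, as in this sketch (the `ceph nfs cluster info` subcommand is assumed available in this Ceph release):

    import subprocess

    print(subprocess.run(["ceph", "nfs", "cluster", "info", "cephfs"],
                         capture_output=True, text=True).stdout)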
Jan 20 19:02:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:39] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:02:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:39] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:02:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 511 B/s wr, 109 op/s
Jan 20 19:02:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:02:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
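The two config-key writes above (mgr/cephadm/osd_remove_queue and mgr/cephadm/spec.nfs.cephfs) are cephadm persisting its removal queue and service-spec state in the monitor's key-value store. Reading a stored spec back, as a sketch:

    import subprocess

    # Specs are stored under mgr/cephadm/spec.<service_name>.
    print(subprocess.run(
        ["ceph", "config-key", "get", "mgr/cephadm/spec.nfs.cephfs"],
        capture_output=True, text=True).stdout)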
Jan 20 19:02:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:02:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:02:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:40 compute-0 sudo[254824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:40 compute-0 sudo[254824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:40 compute-0 sudo[254824]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:40 compute-0 sudo[254849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:02:40 compute-0 sudo[254849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:40.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:40 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:40 compute-0 podman[254916]: 2026-01-20 19:02:40.884280383 +0000 UTC m=+0.066530971 container create f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_sinoussi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:40 compute-0 systemd[1]: Started libpod-conmon-f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a.scope.
Jan 20 19:02:40 compute-0 podman[254916]: 2026-01-20 19:02:40.847739545 +0000 UTC m=+0.029990233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:40 compute-0 podman[254916]: 2026-01-20 19:02:40.9869729 +0000 UTC m=+0.169223508 container init f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:02:40 compute-0 podman[254916]: 2026-01-20 19:02:40.997640405 +0000 UTC m=+0.179891003 container start f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:41 compute-0 podman[254916]: 2026-01-20 19:02:41.001150669 +0000 UTC m=+0.183401277 container attach f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:02:41 compute-0 hardcore_sinoussi[254932]: 167 167
Jan 20 19:02:41 compute-0 systemd[1]: libpod-f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[254916]: 2026-01-20 19:02:41.002463265 +0000 UTC m=+0.184713863 container died f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_sinoussi, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2cc3dcae1f2a58d281a479c051a7cb0062a33577ccd7ce828619dbb614a803f-merged.mount: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[254916]: 2026-01-20 19:02:41.03820164 +0000 UTC m=+0.220452238 container remove f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 19:02:41 compute-0 systemd[1]: libpod-conmon-f88e0f4a51355b24aa803e006f58b05208f1b5ccd2c18e45c6576f8b0eb60c4a.scope: Deactivated successfully.
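The one-second container lifecycle above (create, start, "167 167", died, remove) matches cephadm probing the ceph image for the uid/gid it should run daemons as; 167:167 is the ceph user and group in upstream images. An equivalent one-off probe, sketched under the assumption that cephadm stats /var/lib/ceph inside the image:

    import subprocess

    print(subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat",
         "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True).stdout)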
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.197219784 +0000 UTC m=+0.044747548 container create de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shamir, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:02:41 compute-0 systemd[1]: Started libpod-conmon-de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9.scope.
Jan 20 19:02:41 compute-0 ceph-mon[74381]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 511 B/s wr, 109 op/s
Jan 20 19:02:41 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:02:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:02:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:02:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.18023091 +0000 UTC m=+0.027758694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4458be5d7614da4b4a70905be6be768824001dfc3e0070291348769e4a31c89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4458be5d7614da4b4a70905be6be768824001dfc3e0070291348769e4a31c89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4458be5d7614da4b4a70905be6be768824001dfc3e0070291348769e4a31c89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4458be5d7614da4b4a70905be6be768824001dfc3e0070291348769e4a31c89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4458be5d7614da4b4a70905be6be768824001dfc3e0070291348769e4a31c89/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.310882814 +0000 UTC m=+0.158410668 container init de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shamir, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:02:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:41 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.323925933 +0000 UTC m=+0.171453707 container start de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shamir, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.327426337 +0000 UTC m=+0.174954151 container attach de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 19:02:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:41.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:41 compute-0 naughty_shamir[254971]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:02:41 compute-0 naughty_shamir[254971]: --> All data devices are unavailable
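"All data devices are unavailable" from `lvm batch` here means the one LV passed in, /dev/ceph_vg0/ceph_lv0, was rejected, typically because it is already prepared as an OSD, so the batch run creates nothing; the `lvm list` run that follows below is cephadm confirming what is already deployed. The same inventory by hand, as a sketch mirroring the invocation in the log:

    import json
    import subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid",
         "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
         "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True)
    print(json.dumps(json.loads(out.stdout), indent=2))  # OSDs keyed by id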
Jan 20 19:02:41 compute-0 systemd[1]: libpod-de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.717165162 +0000 UTC m=+0.564692926 container died de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shamir, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 19:02:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:41 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4458be5d7614da4b4a70905be6be768824001dfc3e0070291348769e4a31c89-merged.mount: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[254955]: 2026-01-20 19:02:41.758772135 +0000 UTC m=+0.606299909 container remove de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:02:41 compute-0 systemd[1]: libpod-conmon-de8fb404672b060909734a046d3b291dc1be286718a0c5807298626b05ae1fd9.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 sudo[254849]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 597 B/s wr, 112 op/s
Jan 20 19:02:41 compute-0 sudo[254999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:41 compute-0 sudo[254999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:41 compute-0 sudo[254999]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:41 compute-0 sudo[255024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:02:41 compute-0 sudo[255024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.347989057 +0000 UTC m=+0.053297427 container create 4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:42 compute-0 systemd[1]: Started libpod-conmon-4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914.scope.
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.321623741 +0000 UTC m=+0.026932121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.447489438 +0000 UTC m=+0.152797778 container init 4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_grothendieck, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.457783743 +0000 UTC m=+0.163092073 container start 4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.46217423 +0000 UTC m=+0.167482861 container attach 4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_grothendieck, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:02:42 compute-0 quizzical_grothendieck[255106]: 167 167
Jan 20 19:02:42 compute-0 systemd[1]: libpod-4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914.scope: Deactivated successfully.
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.466064325 +0000 UTC m=+0.171372685 container died 4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 19:02:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8306cc9044485d19b10dedac99139477bbe0d822db964b97a883c95c8b838ac6-merged.mount: Deactivated successfully.
Jan 20 19:02:42 compute-0 podman[255089]: 2026-01-20 19:02:42.524172069 +0000 UTC m=+0.229480429 container remove 4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:02:42 compute-0 systemd[1]: libpod-conmon-4107ad7c45011fb34de700b05a415eb77ee93b888fb13f9b78dc2e66af56b914.scope: Deactivated successfully.
Jan 20 19:02:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:42.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:42 compute-0 podman[255132]: 2026-01-20 19:02:42.745401866 +0000 UTC m=+0.061423343 container create 320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_engelbart, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:42 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:42 compute-0 systemd[1]: Started libpod-conmon-320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170.scope.
Jan 20 19:02:42 compute-0 podman[255132]: 2026-01-20 19:02:42.716190276 +0000 UTC m=+0.032211843 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538c1ef29eb0c5551897a20c4bf17575eee08a835b57f4fdc4a00de1fb5aff8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538c1ef29eb0c5551897a20c4bf17575eee08a835b57f4fdc4a00de1fb5aff8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538c1ef29eb0c5551897a20c4bf17575eee08a835b57f4fdc4a00de1fb5aff8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538c1ef29eb0c5551897a20c4bf17575eee08a835b57f4fdc4a00de1fb5aff8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:42 compute-0 podman[255132]: 2026-01-20 19:02:42.855686557 +0000 UTC m=+0.171708084 container init 320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:42 compute-0 podman[255132]: 2026-01-20 19:02:42.871179901 +0000 UTC m=+0.187201408 container start 320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_engelbart, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:42 compute-0 podman[255132]: 2026-01-20 19:02:42.875711083 +0000 UTC m=+0.191732600 container attach 320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_engelbart, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]: {
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:     "0": [
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:         {
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "devices": [
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "/dev/loop3"
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             ],
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "lv_name": "ceph_lv0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "lv_size": "21470642176",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "name": "ceph_lv0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "tags": {
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.cluster_name": "ceph",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.crush_device_class": "",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.encrypted": "0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.osd_id": "0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.type": "block",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.vdo": "0",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:                 "ceph.with_tpm": "0"
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             },
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "type": "block",
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:             "vg_name": "ceph_vg0"
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:         }
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]:     ]
Jan 20 19:02:43 compute-0 condescending_engelbart[255148]: }
Jan 20 19:02:43 compute-0 systemd[1]: libpod-320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170.scope: Deactivated successfully.
Jan 20 19:02:43 compute-0 podman[255132]: 2026-01-20 19:02:43.234006516 +0000 UTC m=+0.550028023 container died 320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_engelbart, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:43 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:02:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-0538c1ef29eb0c5551897a20c4bf17575eee08a835b57f4fdc4a00de1fb5aff8-merged.mount: Deactivated successfully.
Jan 20 19:02:43 compute-0 podman[255132]: 2026-01-20 19:02:43.284335343 +0000 UTC m=+0.600356820 container remove 320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 19:02:43 compute-0 systemd[1]: libpod-conmon-320b1ad73cdb479517d3797549f7ae0e2985be70773462c9922793db811a3170.scope: Deactivated successfully.
Jan 20 19:02:43 compute-0 ceph-mon[74381]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 597 B/s wr, 112 op/s
Jan 20 19:02:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:43 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:43 compute-0 sudo[255024]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:43 compute-0 sudo[255170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:43 compute-0 sudo[255170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:43 compute-0 sudo[255170]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:43.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:43 compute-0 sudo[255195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:02:43 compute-0 sudo[255195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:43 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 938 B/s wr, 106 op/s
Jan 20 19:02:43 compute-0 podman[255261]: 2026-01-20 19:02:43.946936098 +0000 UTC m=+0.055418284 container create 4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_villani, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:43 compute-0 systemd[1]: Started libpod-conmon-4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff.scope.
Jan 20 19:02:44 compute-0 podman[255261]: 2026-01-20 19:02:43.920260334 +0000 UTC m=+0.028742630 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:44 compute-0 podman[255261]: 2026-01-20 19:02:44.050508768 +0000 UTC m=+0.158991274 container init 4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_villani, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:02:44 compute-0 podman[255261]: 2026-01-20 19:02:44.061276005 +0000 UTC m=+0.169758231 container start 4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_villani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:44 compute-0 podman[255261]: 2026-01-20 19:02:44.065732745 +0000 UTC m=+0.174214991 container attach 4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_villani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:02:44 compute-0 silly_villani[255277]: 167 167
Jan 20 19:02:44 compute-0 systemd[1]: libpod-4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff.scope: Deactivated successfully.
Jan 20 19:02:44 compute-0 conmon[255277]: conmon 4bc615f55b35c6866f8c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff.scope/container/memory.events
Jan 20 19:02:44 compute-0 podman[255261]: 2026-01-20 19:02:44.071143279 +0000 UTC m=+0.179625495 container died 4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_villani, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b78e0b147b6ee524aaab785789f29da5260c179f6f92fae9ccf8f1895b3c051c-merged.mount: Deactivated successfully.
Jan 20 19:02:44 compute-0 podman[255261]: 2026-01-20 19:02:44.12574248 +0000 UTC m=+0.234224696 container remove 4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_villani, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:44 compute-0 systemd[1]: libpod-conmon-4bc615f55b35c6866f8cc43db75ee6d2de04ca6a708b4fb723942512cf8a9aff.scope: Deactivated successfully.
Jan 20 19:02:44 compute-0 podman[255301]: 2026-01-20 19:02:44.332994194 +0000 UTC m=+0.055600668 container create e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:44 compute-0 systemd[1]: Started libpod-conmon-e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654.scope.
Jan 20 19:02:44 compute-0 podman[255301]: 2026-01-20 19:02:44.310525603 +0000 UTC m=+0.033132107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:02:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a844912185b781a47343463a7385d20c5843f2c46eb49fac3d6ee3c85736c494/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a844912185b781a47343463a7385d20c5843f2c46eb49fac3d6ee3c85736c494/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a844912185b781a47343463a7385d20c5843f2c46eb49fac3d6ee3c85736c494/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a844912185b781a47343463a7385d20c5843f2c46eb49fac3d6ee3c85736c494/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:44 compute-0 nova_compute[254061]: 2026-01-20 19:02:44.441 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:02:44 compute-0 podman[255301]: 2026-01-20 19:02:44.454222367 +0000 UTC m=+0.176828891 container init e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 19:02:44 compute-0 nova_compute[254061]: 2026-01-20 19:02:44.465 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:02:44 compute-0 podman[255301]: 2026-01-20 19:02:44.474474558 +0000 UTC m=+0.197081042 container start e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:44 compute-0 podman[255301]: 2026-01-20 19:02:44.478567568 +0000 UTC m=+0.201174062 container attach e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:02:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:44.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:44 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:45 compute-0 podman[255361]: 2026-01-20 19:02:45.130307512 +0000 UTC m=+0.097894770 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:45 compute-0 lvm[255418]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:02:45 compute-0 lvm[255418]: VG ceph_vg0 finished
Jan 20 19:02:45 compute-0 affectionate_chandrasekhar[255318]: {}
Jan 20 19:02:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:45 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:45 compute-0 systemd[1]: libpod-e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654.scope: Deactivated successfully.
Jan 20 19:02:45 compute-0 podman[255301]: 2026-01-20 19:02:45.35303939 +0000 UTC m=+1.075645884 container died e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:45 compute-0 systemd[1]: libpod-e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654.scope: Consumed 1.526s CPU time.
Jan 20 19:02:45 compute-0 ceph-mon[74381]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 938 B/s wr, 106 op/s
Jan 20 19:02:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-a844912185b781a47343463a7385d20c5843f2c46eb49fac3d6ee3c85736c494-merged.mount: Deactivated successfully.
Jan 20 19:02:45 compute-0 podman[255301]: 2026-01-20 19:02:45.4042837 +0000 UTC m=+1.126890154 container remove e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 19:02:45 compute-0 systemd[1]: libpod-conmon-e444f7413167e2d8249a8b231fbf82d40c28b6172536f65f0af21fb20a5fb654.scope: Deactivated successfully.
Jan 20 19:02:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:45.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:45 compute-0 sudo[255195]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:02:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:45 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:45 compute-0 sudo[255433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:02:45 compute-0 sudo[255433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:45 compute-0 sudo[255433]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 938 B/s wr, 101 op/s
Jan 20 19:02:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:02:46 compute-0 ceph-mon[74381]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 938 B/s wr, 101 op/s
Jan 20 19:02:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:46.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:46 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:47.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:02:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:47.114Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:02:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:47.114Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:02:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:47 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:47.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:47 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 1023 B/s wr, 101 op/s
Jan 20 19:02:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190248 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:02:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:48.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:48 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:48 compute-0 ceph-mon[74381]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 1023 B/s wr, 101 op/s
Jan 20 19:02:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:49 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:49.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:49 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:49] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:02:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:49] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:02:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 20 19:02:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:50.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:50 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:51 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c700032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:51 compute-0 ceph-mon[74381]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 20 19:02:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:51.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:51 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 20 19:02:52 compute-0 ceph-mon[74381]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 511 B/s wr, 4 op/s
Jan 20 19:02:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:52.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:52 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:53 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:53.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:53 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:02:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:54.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:54 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:54 compute-0 sudo[255467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:02:54 compute-0 sudo[255467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[255467]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:02:54
Jan 20 19:02:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:02:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:02:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.nfs', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'images']
Jan 20 19:02:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:02:55 compute-0 ceph-mon[74381]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:02:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:02:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:55 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:55.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:55 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:02:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:56.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:56 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:02:57.115Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:02:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:57 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:57 compute-0 ceph-mon[74381]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:57.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:57 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:58 compute-0 ceph-mon[74381]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:02:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:02:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:02:58.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:02:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:58 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:59 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:02:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:02:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:02:59.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:02:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:02:59 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:02:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:59] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:02:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:02:59] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:02:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.132 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.133 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.134 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.134 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.162 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.163 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.164 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.164 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.165 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.165 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.165 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.166 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.166 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.222 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.223 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.223 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.223 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.224 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:03:00 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:03:00 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3329327621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.693 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:03:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:00.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:00 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.871 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.872 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4898MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.872 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:03:00 compute-0 nova_compute[254061]: 2026-01-20 19:03:00.872 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:03:00 compute-0 ceph-mon[74381]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3329327621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.018 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.019 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.075 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:03:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:01 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:01.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:03:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022891902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.505 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.511 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.534 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.536 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:03:01 compute-0 nova_compute[254061]: 2026-01-20 19:03:01.536 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:03:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:01 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4271987647' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1022891902' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3578055572' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:02.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:02 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c740016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:03 compute-0 ceph-mon[74381]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1842160609' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:03 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:03.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:03 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2289971360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:03:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:04.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:04 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:05 compute-0 ceph-mon[74381]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:05 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:05.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:05 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:06.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:06 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:07.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:03:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:07 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:07.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:07 compute-0 ceph-mon[74381]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:07 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:08 compute-0 ceph-mon[74381]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:08.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:08 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980027d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:09 compute-0 podman[255551]: 2026-01-20 19:03:09.10599814 +0000 UTC m=+0.080049013 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 20 19:03:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:09 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:09.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:09 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:09] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:03:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:09] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:03:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:10.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:11 compute-0 ceph-mon[74381]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:03:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:11 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:11.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:11 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:12.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:12 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.24454 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 20 19:03:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1631395252' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.15075 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.15075 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 20 19:03:13 compute-0 ceph-mon[74381]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2089565920' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 20 19:03:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1631395252' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 20 19:03:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:13 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:13.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:13 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:14 compute-0 ceph-mon[74381]: from='client.24454 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:14 compute-0 ceph-mon[74381]: from='client.15075 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:14 compute-0 ceph-mon[74381]: from='client.15075 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 20 19:03:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:14.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:14 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:14 compute-0 sudo[255576]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:03:14 compute-0 sudo[255576]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:14 compute-0 sudo[255576]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:15 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:15.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:15 compute-0 ceph-mon[74381]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:15 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98003c60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:16 compute-0 podman[255602]: 2026-01-20 19:03:16.11621544 +0000 UTC m=+0.093043699 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:03:16 compute-0 ceph-mon[74381]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:16.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:16 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:17.117Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:03:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:17 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:17.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:17 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:18.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:18 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:19 compute-0 ceph-mon[74381]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:19 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:19.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:19 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:19] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Jan 20 19:03:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:19] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Jan 20 19:03:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:20.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:20 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:21 compute-0 ceph-mon[74381]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:21 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98004970 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:21.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:21 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:22.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:22 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:23 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:23 compute-0 ceph-mon[74381]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:23.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:23 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70001ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:24 compute-0 ceph-mon[74381]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:24.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:25 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:25.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:25 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:03:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:26.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:26 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70001ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:27.118Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:03:27 compute-0 ceph-mon[74381]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:27 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:27.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:27 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c74003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:28.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:29 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:29 compute-0 ceph-mon[74381]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:29 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70001ba0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:29.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:29 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:29] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:03:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:29] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:03:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:03:30.278 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:03:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:03:30.278 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:03:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:03:30.279 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:03:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:30.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:30 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:31 compute-0 ceph-mon[74381]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:31 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:31.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:31 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:32.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:32 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 20 19:03:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/167554235' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.15123 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 20 19:03:33 compute-0 ceph-mon[74381]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/167554235' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 20 19:03:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2579592767' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Jan 20 19:03:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:33 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:33.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:33 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70002b70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:34 compute-0 ceph-mon[74381]: from='client.15123 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:34 compute-0 ceph-mon[74381]: from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:03:34 compute-0 ceph-mon[74381]: from='client.24542 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Jan 20 19:03:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:34.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:34 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:35 compute-0 sudo[255649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:03:35 compute-0 sudo[255649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:35 compute-0 sudo[255649]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:35 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:35.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:35 compute-0 ceph-mon[74381]: pgmap v632: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:35 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:36.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:36 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70002b70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:36 compute-0 ceph-mon[74381]: pgmap v633: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:37.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:03:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:37.119Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:03:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:37.120Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:03:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:37 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:37.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:37 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:38.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:38 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:39 compute-0 ceph-mon[74381]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:39 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70002b70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:39.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:39 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:39] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:03:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:39] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Jan 20 19:03:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:40 compute-0 podman[255679]: 2026-01-20 19:03:40.080187295 +0000 UTC m=+0.056996896 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:03:40 compute-0 ceph-mon[74381]: pgmap v635: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:03:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:40.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:40 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:41 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:41 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70002b70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:42.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:42 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70002b70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:43 compute-0 ceph-mon[74381]: pgmap v636: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:43 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:43.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:43 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:44.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:44 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70002b70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:45 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:45.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:45 compute-0 ceph-mon[74381]: pgmap v637: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:45 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:46 compute-0 sudo[255706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:46 compute-0 sudo[255706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:46 compute-0 sudo[255706]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:46 compute-0 sudo[255731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:03:46 compute-0 sudo[255731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:46 compute-0 sudo[255731]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:46.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:46 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:03:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:47.121Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:03:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:47.121Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:03:47 compute-0 podman[255786]: 2026-01-20 19:03:47.131020002 +0000 UTC m=+0.108177864 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 20 19:03:47 compute-0 ceph-mon[74381]: pgmap v638: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:03:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:03:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:47 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:47.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:47 compute-0 sudo[255815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:47 compute-0 sudo[255815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:47 compute-0 sudo[255815]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:47 compute-0 sudo[255840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:03:47 compute-0 sudo[255840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:47 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.162927075 +0000 UTC m=+0.047731998 container create 99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:03:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:03:48 compute-0 systemd[1]: Started libpod-conmon-99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857.scope.
Jan 20 19:03:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.139231552 +0000 UTC m=+0.024036575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.236493843 +0000 UTC m=+0.121298806 container init 99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.243416669 +0000 UTC m=+0.128221602 container start 99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.246601934 +0000 UTC m=+0.131406907 container attach 99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:48 compute-0 pensive_yalow[255922]: 167 167
Jan 20 19:03:48 compute-0 systemd[1]: libpod-99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857.scope: Deactivated successfully.
Jan 20 19:03:48 compute-0 conmon[255922]: conmon 99201d000a43a0311380 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857.scope/container/memory.events
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.250079307 +0000 UTC m=+0.134884240 container died 99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 19:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea644a121e4f9b70061583d74ab3debae5b7eb19299a8da06e7d168c44ff5430-merged.mount: Deactivated successfully.
Jan 20 19:03:48 compute-0 podman[255906]: 2026-01-20 19:03:48.284642701 +0000 UTC m=+0.169447614 container remove 99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_yalow, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:03:48 compute-0 systemd[1]: libpod-conmon-99201d000a43a0311380bd3fdb40982fd229860ae9e9facfd1b117726595d857.scope: Deactivated successfully.
Jan 20 19:03:48 compute-0 podman[255946]: 2026-01-20 19:03:48.456075737 +0000 UTC m=+0.052720441 container create 8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_clarke, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:48 compute-0 systemd[1]: Started libpod-conmon-8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c.scope.
Jan 20 19:03:48 compute-0 podman[255946]: 2026-01-20 19:03:48.435956579 +0000 UTC m=+0.032601323 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:03:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fd23ae9ebb4e14f3e3a4fba615f24a9413b86fe04770ef7eb8844a1aa06981/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fd23ae9ebb4e14f3e3a4fba615f24a9413b86fe04770ef7eb8844a1aa06981/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fd23ae9ebb4e14f3e3a4fba615f24a9413b86fe04770ef7eb8844a1aa06981/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fd23ae9ebb4e14f3e3a4fba615f24a9413b86fe04770ef7eb8844a1aa06981/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80fd23ae9ebb4e14f3e3a4fba615f24a9413b86fe04770ef7eb8844a1aa06981/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 podman[255946]: 2026-01-20 19:03:48.557719676 +0000 UTC m=+0.154364390 container init 8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_clarke, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:48 compute-0 podman[255946]: 2026-01-20 19:03:48.571586767 +0000 UTC m=+0.168231461 container start 8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:03:48 compute-0 podman[255946]: 2026-01-20 19:03:48.575250145 +0000 UTC m=+0.171894879 container attach 8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_clarke, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 19:03:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:48.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:48 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:48 compute-0 peaceful_clarke[255962]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:03:48 compute-0 peaceful_clarke[255962]: --> All data devices are unavailable
Jan 20 19:03:48 compute-0 systemd[1]: libpod-8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c.scope: Deactivated successfully.
Jan 20 19:03:48 compute-0 podman[255946]: 2026-01-20 19:03:48.9608663 +0000 UTC m=+0.557511034 container died 8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-80fd23ae9ebb4e14f3e3a4fba615f24a9413b86fe04770ef7eb8844a1aa06981-merged.mount: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[255946]: 2026-01-20 19:03:49.005047252 +0000 UTC m=+0.601691956 container remove 8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_clarke, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:49 compute-0 systemd[1]: libpod-conmon-8897a7c43b8502b00a13a49a35d900f1cefaec17b7c3b3b5d478000b2e91182c.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 sudo[255840]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:49 compute-0 sudo[255988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:49 compute-0 sudo[255988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:49 compute-0 sudo[255988]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:49 compute-0 ceph-mon[74381]: pgmap v639: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1081987795' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:03:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1081987795' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:03:49 compute-0 sudo[256013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:03:49 compute-0 sudo[256013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:49 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.643683265 +0000 UTC m=+0.036472617 container create 8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 19:03:49 compute-0 systemd[1]: Started libpod-conmon-8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4.scope.
Jan 20 19:03:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.723261974 +0000 UTC m=+0.116051346 container init 8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_einstein, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.629420024 +0000 UTC m=+0.022209396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.730528778 +0000 UTC m=+0.123318130 container start 8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_einstein, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.733033305 +0000 UTC m=+0.125822677 container attach 8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_einstein, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:03:49 compute-0 sad_einstein[256095]: 167 167
Jan 20 19:03:49 compute-0 systemd[1]: libpod-8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.736044386 +0000 UTC m=+0.128833778 container died 8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-34c70097c688a50ba7f696df85e4cb18ffaf315340207fa360b97d845bb07b60-merged.mount: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[256078]: 2026-01-20 19:03:49.779164369 +0000 UTC m=+0.171953711 container remove 8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_einstein, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:03:49 compute-0 systemd[1]: libpod-conmon-8f32d92b7b4e14f42a272a7ec596899a67fa9bfa7a23252c7169a0d6844cf3e4.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:49 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:49] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Jan 20 19:03:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:49] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Jan 20 19:03:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:49 compute-0 podman[256120]: 2026-01-20 19:03:49.961676741 +0000 UTC m=+0.043070133 container create 4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:49 compute-0 systemd[1]: Started libpod-conmon-4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e.scope.
Jan 20 19:03:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ade7f8de1132b632b92030b8d5739bc3b08071bf4ffcb87d496342b24e65d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ade7f8de1132b632b92030b8d5739bc3b08071bf4ffcb87d496342b24e65d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ade7f8de1132b632b92030b8d5739bc3b08071bf4ffcb87d496342b24e65d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7ade7f8de1132b632b92030b8d5739bc3b08071bf4ffcb87d496342b24e65d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 podman[256120]: 2026-01-20 19:03:50.026157276 +0000 UTC m=+0.107550688 container init 4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:03:50 compute-0 podman[256120]: 2026-01-20 19:03:50.035892346 +0000 UTC m=+0.117285738 container start 4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:03:50 compute-0 podman[256120]: 2026-01-20 19:03:50.038568798 +0000 UTC m=+0.119962200 container attach 4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:03:50 compute-0 podman[256120]: 2026-01-20 19:03:49.944940473 +0000 UTC m=+0.026333865 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]: {
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:     "0": [
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:         {
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "devices": [
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "/dev/loop3"
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             ],
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "lv_name": "ceph_lv0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "lv_size": "21470642176",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "name": "ceph_lv0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "tags": {
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.crush_device_class": "",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.encrypted": "0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.osd_id": "0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.type": "block",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.vdo": "0",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:                 "ceph.with_tpm": "0"
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             },
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "type": "block",
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:             "vg_name": "ceph_vg0"
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:         }
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]:     ]
Jan 20 19:03:50 compute-0 romantic_maxwell[256136]: }
Jan 20 19:03:50 compute-0 systemd[1]: libpod-4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e.scope: Deactivated successfully.
Jan 20 19:03:50 compute-0 podman[256120]: 2026-01-20 19:03:50.353152613 +0000 UTC m=+0.434546075 container died 4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7ade7f8de1132b632b92030b8d5739bc3b08071bf4ffcb87d496342b24e65d9-merged.mount: Deactivated successfully.
Jan 20 19:03:50 compute-0 podman[256120]: 2026-01-20 19:03:50.394723914 +0000 UTC m=+0.476117306 container remove 4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_maxwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:03:50 compute-0 systemd[1]: libpod-conmon-4780576ab0a986e3ee287678c98513283c224a88a9493df2c1aa60c32b8bbb3e.scope: Deactivated successfully.
Jan 20 19:03:50 compute-0 sudo[256013]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:50 compute-0 sudo[256160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:50 compute-0 sudo[256160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:50 compute-0 sudo[256160]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:50 compute-0 sudo[256185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:03:50 compute-0 sudo[256185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:50.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:50 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:50 compute-0 podman[256252]: 2026-01-20 19:03:50.924714742 +0000 UTC m=+0.035691676 container create 11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:50 compute-0 systemd[1]: Started libpod-conmon-11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88.scope.
Jan 20 19:03:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:50 compute-0 podman[256252]: 2026-01-20 19:03:50.993856541 +0000 UTC m=+0.104833515 container init 11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_goldberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 19:03:51 compute-0 podman[256252]: 2026-01-20 19:03:50.999871263 +0000 UTC m=+0.110848207 container start 11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_goldberg, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:03:51 compute-0 podman[256252]: 2026-01-20 19:03:51.00313269 +0000 UTC m=+0.114109664 container attach 11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:51 compute-0 systemd[1]: libpod-11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 wonderful_goldberg[256269]: 167 167
Jan 20 19:03:51 compute-0 conmon[256269]: conmon 11804c523f71503dd241 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88.scope/container/memory.events
Jan 20 19:03:51 compute-0 podman[256252]: 2026-01-20 19:03:51.004285401 +0000 UTC m=+0.115262345 container died 11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:51 compute-0 podman[256252]: 2026-01-20 19:03:50.909963297 +0000 UTC m=+0.020940261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b29e2dde341bd6906c084d317c5b8bd5e34651f7ed8a7ac730f317aa36c5493c-merged.mount: Deactivated successfully.
Jan 20 19:03:51 compute-0 podman[256252]: 2026-01-20 19:03:51.038255659 +0000 UTC m=+0.149232603 container remove 11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 20 19:03:51 compute-0 systemd[1]: libpod-conmon-11804c523f71503dd24159e3f220d618532e4407259dd2fd550223d65471fd88.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 ceph-mon[74381]: pgmap v640: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:51 compute-0 podman[256292]: 2026-01-20 19:03:51.221661215 +0000 UTC m=+0.056926124 container create 55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:03:51 compute-0 systemd[1]: Started libpod-conmon-55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c.scope.
Jan 20 19:03:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81af9d4ef145f2b95b3e93b229e98ecb22ca1702813743a5699a7f7e0247acfb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81af9d4ef145f2b95b3e93b229e98ecb22ca1702813743a5699a7f7e0247acfb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81af9d4ef145f2b95b3e93b229e98ecb22ca1702813743a5699a7f7e0247acfb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81af9d4ef145f2b95b3e93b229e98ecb22ca1702813743a5699a7f7e0247acfb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 podman[256292]: 2026-01-20 19:03:51.296442725 +0000 UTC m=+0.131707684 container init 55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:03:51 compute-0 podman[256292]: 2026-01-20 19:03:51.205503282 +0000 UTC m=+0.040768211 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:03:51 compute-0 podman[256292]: 2026-01-20 19:03:51.303515594 +0000 UTC m=+0.138780503 container start 55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 19:03:51 compute-0 podman[256292]: 2026-01-20 19:03:51.306093943 +0000 UTC m=+0.141358882 container attach 55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:03:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:51 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c8c003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:51 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:51 compute-0 lvm[256384]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:51 compute-0 lvm[256384]: VG ceph_vg0 finished
Jan 20 19:03:51 compute-0 nervous_hamilton[256308]: {}
Jan 20 19:03:51 compute-0 systemd[1]: libpod-55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 systemd[1]: libpod-55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c.scope: Consumed 1.041s CPU time.
Jan 20 19:03:51 compute-0 podman[256292]: 2026-01-20 19:03:51.979269831 +0000 UTC m=+0.814534770 container died 55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hamilton, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 19:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-81af9d4ef145f2b95b3e93b229e98ecb22ca1702813743a5699a7f7e0247acfb-merged.mount: Deactivated successfully.
Jan 20 19:03:52 compute-0 podman[256292]: 2026-01-20 19:03:52.023489694 +0000 UTC m=+0.858754603 container remove 55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:52 compute-0 systemd[1]: libpod-conmon-55bc8abd64800aba6a307c991356a7037362e207e09069ec4d5d0ae5d649c76c.scope: Deactivated successfully.
Jan 20 19:03:52 compute-0 sudo[256185]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:52 compute-0 sudo[256398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:52 compute-0 sudo[256398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:52 compute-0 sudo[256398]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:52 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:52.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:53 compute-0 ceph-mon[74381]: pgmap v641: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:53 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:53 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:03:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:53 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:53 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:54 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c780042e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:54.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:03:54
Jan 20 19:03:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:03:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:03:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log', 'images', 'backups', 'vms', 'default.rgw.meta', '.mgr']
Jan 20 19:03:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:55 compute-0 ceph-mon[74381]: pgmap v642: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:03:55 compute-0 sudo[256428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:03:55 compute-0 sudo[256428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:55 compute-0 sudo[256428]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:03:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:55 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:03:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:03:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:55 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98003680 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:03:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:56 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:56.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:03:57.122Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:03:57 compute-0 ceph-mon[74381]: pgmap v643: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:03:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:57 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:57.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:57 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:58 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980023c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:03:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:03:58.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:03:59 compute-0 ceph-mon[74381]: pgmap v644: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:03:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:59 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:03:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:03:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:03:59.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:03:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:03:59 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:03:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:59] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:03:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:03:59] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:03:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:00 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:00.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:01 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980023c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:01 compute-0 ceph-mon[74381]: pgmap v645: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:01.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.528 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.548 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.548 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.549 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.549 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.549 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.550 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.574 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.574 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.575 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.576 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:04:01 compute-0 nova_compute[254061]: 2026-01-20 19:04:01.576 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:04:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:01 compute-0 anacron[5103]: Job `cron.weekly' started
Jan 20 19:04:01 compute-0 anacron[5103]: Job `cron.weekly' terminated
Jan 20 19:04:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:01 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:04:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2511768980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.074 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.250 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.251 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4899MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.252 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.252 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.314 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.314 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.333 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:04:02 compute-0 ceph-mon[74381]: pgmap v646: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2511768980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:04:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236069433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.805 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.809 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:04:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:02 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.854 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.855 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:04:02 compute-0 nova_compute[254061]: 2026-01-20 19:04:02.856 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:04:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:03 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.436 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.436 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.437 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.437 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.456 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.457 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:03 compute-0 nova_compute[254061]: 2026-01-20 19:04:03.458 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:04:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:03.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3236069433' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2652091757' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:03 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980023c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:04 compute-0 ceph-mon[74381]: pgmap v647: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/678316524' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2854096681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:04 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c680016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:05 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:05.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3639349' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:04:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:05 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:06 compute-0 ceph-mon[74381]: pgmap v648: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:06 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980023c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:06.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:07.123Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:04:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:07 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:07.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:07 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:04:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:08 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:08.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:08 compute-0 ceph-mon[74381]: pgmap v649: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:04:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:09 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:09.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:09 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:09] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:04:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:09] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:04:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:10 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:10.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:10 compute-0 ceph-mon[74381]: pgmap v650: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:04:11 compute-0 podman[256515]: 2026-01-20 19:04:11.101527922 +0000 UTC m=+0.071519874 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 20 19:04:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:11 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0034e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:11.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:11 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:12 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:12.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:13 compute-0 ceph-mon[74381]: pgmap v651: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:13 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:13.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:13 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:14 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c980041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:14.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:15 compute-0 sudo[256538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:04:15 compute-0 sudo[256538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:15 compute-0 sudo[256538]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:15 compute-0 ceph-mon[74381]: pgmap v652: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:15 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:15.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:15 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:16 compute-0 ceph-mon[74381]: pgmap v653: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:16 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:16.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:17.124Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:04:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:17 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98004b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:17.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:17 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:04:18 compute-0 podman[256566]: 2026-01-20 19:04:18.111603457 +0000 UTC m=+0.087055890 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 19:04:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:18 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:04:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:18.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:04:18 compute-0 ceph-mon[74381]: pgmap v654: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:04:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:19 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:19.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:19] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Jan 20 19:04:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:19] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Jan 20 19:04:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:19 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98004b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:20 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98004b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:20.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:20 compute-0 ceph-mon[74381]: pgmap v655: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:21 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:21.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:21 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c68003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:22 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c70004060 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:22.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:22 compute-0 ceph-mon[74381]: pgmap v656: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:23 compute-0 sshd-session[256599]: Connection closed by 143.244.178.70 port 58400
Jan 20 19:04:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:23 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c98004b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:23.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:23 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0045e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:24 compute-0 kernel: ganesha.nfsd[255642]: segfault at 50 ip 00007f9d2036c32e sp 00007f9ca57f9210 error 4 in libntirpc.so.5.8[7f9d20351000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 20 19:04:24 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 19:04:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[254557]: 20/01/2026 19:04:24 : epoch 696fd132 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9c7c0045e0 fd 39 proxy ignored for local
Jan 20 19:04:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:24.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:24 compute-0 systemd[1]: Started Process Core Dump (PID 256602/UID 0).
Jan 20 19:04:24 compute-0 ceph-mon[74381]: pgmap v657: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:25.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:04:26 compute-0 systemd-coredump[256603]: Process 254562 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 57:
                                                    #0  0x00007f9d2036c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 19:04:26 compute-0 systemd[1]: systemd-coredump@12-256602-0.service: Deactivated successfully.
Jan 20 19:04:26 compute-0 systemd[1]: systemd-coredump@12-256602-0.service: Consumed 1.255s CPU time.
Jan 20 19:04:26 compute-0 podman[256609]: 2026-01-20 19:04:26.289650267 +0000 UTC m=+0.045558891 container died c295f9130b000349283ee1d76cc63270dc10a72561b2da89947bc3a0790f6477 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3cc76d181d8b14a607c2f0b6c5c1c15018584fa4feefedcd2183d36a91c8131-merged.mount: Deactivated successfully.
Jan 20 19:04:26 compute-0 podman[256609]: 2026-01-20 19:04:26.336680745 +0000 UTC m=+0.092589359 container remove c295f9130b000349283ee1d76cc63270dc10a72561b2da89947bc3a0790f6477 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:26 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 19:04:26 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:04:26 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.852s CPU time.
Jan 20 19:04:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:26.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:27 compute-0 ceph-mon[74381]: pgmap v658: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:27.125Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:04:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:27.126Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:04:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:04:27.182 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:04:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:04:27.183 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:04:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:04:27.184 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:04:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:27.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:04:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:28.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:29 compute-0 ceph-mon[74381]: pgmap v659: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:04:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:29.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:29] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:04:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:29] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:04:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:04:30.278 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:04:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:04:30.279 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:04:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:04:30.279 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:04:30 compute-0 ceph-mon[74381]: pgmap v660: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190430 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:04:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:30.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:31.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:32.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:32 compute-0 ceph-mon[74381]: pgmap v661: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:04:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:33.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:04:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:34.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:34 compute-0 ceph-mon[74381]: pgmap v662: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:35 compute-0 sudo[256661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:04:35 compute-0 sudo[256661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:35 compute-0 sudo[256661]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:35.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:36 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 13.
Jan 20 19:04:36 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:04:36 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.852s CPU time.
Jan 20 19:04:36 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 19:04:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:36.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:36 compute-0 ceph-mon[74381]: pgmap v663: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:04:37 compute-0 podman[256734]: 2026-01-20 19:04:37.070152612 +0000 UTC m=+0.066268374 container create 523e647171cf291007556fd13fcb42908a492154545ae31cc25017a55d13fcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc94cea780b6fb43253686b5ad1213af61b36ee1e8306b6d4947b9b0d16d359/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc94cea780b6fb43253686b5ad1213af61b36ee1e8306b6d4947b9b0d16d359/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc94cea780b6fb43253686b5ad1213af61b36ee1e8306b6d4947b9b0d16d359/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fc94cea780b6fb43253686b5ad1213af61b36ee1e8306b6d4947b9b0d16d359/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:37 compute-0 podman[256734]: 2026-01-20 19:04:37.125607005 +0000 UTC m=+0.121722787 container init 523e647171cf291007556fd13fcb42908a492154545ae31cc25017a55d13fcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:37.127Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:04:37 compute-0 podman[256734]: 2026-01-20 19:04:37.132346095 +0000 UTC m=+0.128461837 container start 523e647171cf291007556fd13fcb42908a492154545ae31cc25017a55d13fcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:37 compute-0 podman[256734]: 2026-01-20 19:04:37.042530783 +0000 UTC m=+0.038646615 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:37 compute-0 bash[256734]: 523e647171cf291007556fd13fcb42908a492154545ae31cc25017a55d13fcaf
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 19:04:37 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 19:04:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:37 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:04:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:37.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:04:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:38.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:38 compute-0 ceph-mon[74381]: pgmap v664: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 853 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:04:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:39.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:39] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:04:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:39] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Jan 20 19:04:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:04:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:40.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:41 compute-0 ceph-mon[74381]: pgmap v665: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:04:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:04:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Jan 20 19:04:42 compute-0 podman[256796]: 2026-01-20 19:04:42.094619643 +0000 UTC m=+0.059549493 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 19:04:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:42.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:43 compute-0 ceph-mon[74381]: pgmap v666: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 255 B/s wr, 0 op/s
Jan 20 19:04:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:43 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:04:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:43 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:04:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:04:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:44.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:45 compute-0 ceph-mon[74381]: pgmap v667: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:04:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:04:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:46.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:47.128Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:04:47 compute-0 ceph-mon[74381]: pgmap v668: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Jan 20 19:04:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:47.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:04:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:04:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:48.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:04:49 compute-0 podman[256822]: 2026-01-20 19:04:49.168470688 +0000 UTC m=+0.139419793 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:04:49 compute-0 ceph-mon[74381]: pgmap v669: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:04:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2775371772' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:04:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2775371772' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8c4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:04:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:49.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:49] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:04:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:49] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:04:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:49 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8c4000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:04:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:50 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8a4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:50.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:51 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc89c000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:51 compute-0 ceph-mon[74381]: pgmap v670: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:04:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:51.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:51 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8b40016e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:04:52 compute-0 ceph-mon[74381]: pgmap v671: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:04:52 compute-0 sudo[256869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:52 compute-0 sudo[256869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:52 compute-0 sudo[256869]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:52 compute-0 sudo[256894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:04:52 compute-0 sudo[256894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190452 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:04:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:52 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8c4001dd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:52.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:53 compute-0 sudo[256894]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:04:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:53 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8a40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:53.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:53 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:04:53 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:04:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:04:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:53 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc89c0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:04:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 20 19:04:53 compute-0 sudo[256951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:53 compute-0 sudo[256951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:53 compute-0 sudo[256951]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:54 compute-0 sudo[256976]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:04:54 compute-0 sudo[256976]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.417927943 +0000 UTC m=+0.043850755 container create 468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_napier, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:54 compute-0 systemd[1]: Started libpod-conmon-468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2.scope.
Jan 20 19:04:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.395492408 +0000 UTC m=+0.021415280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.505396563 +0000 UTC m=+0.131319405 container init 468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.511709404 +0000 UTC m=+0.137632226 container start 468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.515486985 +0000 UTC m=+0.141409827 container attach 468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:54 compute-0 thirsty_napier[257060]: 167 167
Jan 20 19:04:54 compute-0 systemd[1]: libpod-468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2.scope: Deactivated successfully.
Jan 20 19:04:54 compute-0 conmon[257060]: conmon 468dcd38eb9be2f5b8d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2.scope/container/memory.events
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.519074473 +0000 UTC m=+0.144997275 container died 468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 19:04:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc9ad3738576773ac2a6d3d92f45a59b1ba9093be4c3faa8efebbfa475fe5ab-merged.mount: Deactivated successfully.
Jan 20 19:04:54 compute-0 podman[257043]: 2026-01-20 19:04:54.567424777 +0000 UTC m=+0.193347599 container remove 468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_napier, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:54 compute-0 systemd[1]: libpod-conmon-468dcd38eb9be2f5b8d74bfcfe98faf94dbbd2af93dd9ab439c2ebe63e3e34c2.scope: Deactivated successfully.
Jan 20 19:04:54 compute-0 podman[257086]: 2026-01-20 19:04:54.752082241 +0000 UTC m=+0.038658644 container create d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 19:04:54 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:54 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:04:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:04:54 compute-0 ceph-mon[74381]: pgmap v672: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 767 B/s wr, 2 op/s
Jan 20 19:04:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:04:54 compute-0 systemd[1]: Started libpod-conmon-d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de.scope.
Jan 20 19:04:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe0de0410a4725291e540afb593fc16b5193752104d5b8e0fdb01639f210f226/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe0de0410a4725291e540afb593fc16b5193752104d5b8e0fdb01639f210f226/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe0de0410a4725291e540afb593fc16b5193752104d5b8e0fdb01639f210f226/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe0de0410a4725291e540afb593fc16b5193752104d5b8e0fdb01639f210f226/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe0de0410a4725291e540afb593fc16b5193752104d5b8e0fdb01639f210f226/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:54 compute-0 podman[257086]: 2026-01-20 19:04:54.735779461 +0000 UTC m=+0.022355884 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:54 compute-0 podman[257086]: 2026-01-20 19:04:54.839334046 +0000 UTC m=+0.125910469 container init d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 19:04:54 compute-0 podman[257086]: 2026-01-20 19:04:54.847490715 +0000 UTC m=+0.134067118 container start d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:04:54 compute-0 podman[257086]: 2026-01-20 19:04:54.851476463 +0000 UTC m=+0.138052876 container attach d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:54 compute-0 kernel: ganesha.nfsd[256857]: segfault at 50 ip 00007fc95003f32e sp 00007fc8bbffe210 error 4 in libntirpc.so.5.8[7fc950024000+2c000] likely on CPU 7 (core 0, socket 7)
Jan 20 19:04:54 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 19:04:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[256749]: 20/01/2026 19:04:54 : epoch 696fd1c5 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc8c4001dd0 fd 38 proxy ignored for local
Jan 20 19:04:54 compute-0 systemd[1]: Started Process Core Dump (PID 257107/UID 0).
Jan 20 19:04:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:04:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:54.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:04:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:04:54
Jan 20 19:04:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:04:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:04:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.nfs', 'volumes', 'default.rgw.control', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'images', 'vms']
Jan 20 19:04:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:55 compute-0 stoic_bouman[257102]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:04:55 compute-0 stoic_bouman[257102]: --> All data devices are unavailable
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 podman[257086]: 2026-01-20 19:04:55.304394867 +0000 UTC m=+0.590971320 container died d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 systemd[1]: libpod-d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de.scope: Deactivated successfully.
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:04:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe0de0410a4725291e540afb593fc16b5193752104d5b8e0fdb01639f210f226-merged.mount: Deactivated successfully.
Jan 20 19:04:55 compute-0 podman[257086]: 2026-01-20 19:04:55.414889729 +0000 UTC m=+0.701466132 container remove d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_bouman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 19:04:55 compute-0 systemd[1]: libpod-conmon-d77dda397dc26c1dd2b807a88365e3a286234a79745fa15bbb82ad224e6227de.scope: Deactivated successfully.
Jan 20 19:04:55 compute-0 sudo[256976]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:55 compute-0 sudo[257133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:04:55 compute-0 sudo[257133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:55 compute-0 sudo[257133]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:55 compute-0 sudo[257141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:55 compute-0 sudo[257141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:55 compute-0 sudo[257141]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:55 compute-0 sudo[257183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:04:55 compute-0 sudo[257183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:55.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:04:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:04:55 compute-0 podman[257248]: 2026-01-20 19:04:55.922252622 +0000 UTC m=+0.040423492 container create a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brown, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:04:55 compute-0 systemd[1]: Started libpod-conmon-a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5.scope.
Jan 20 19:04:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:55 compute-0 podman[257248]: 2026-01-20 19:04:55.993894095 +0000 UTC m=+0.112064985 container init a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:04:55 compute-0 podman[257248]: 2026-01-20 19:04:55.903950738 +0000 UTC m=+0.022121628 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:56 compute-0 podman[257248]: 2026-01-20 19:04:56.004062149 +0000 UTC m=+0.122233029 container start a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brown, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:04:56 compute-0 podman[257248]: 2026-01-20 19:04:56.007522733 +0000 UTC m=+0.125693653 container attach a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brown, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:04:56 compute-0 eager_brown[257264]: 167 167
Jan 20 19:04:56 compute-0 systemd[1]: libpod-a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5.scope: Deactivated successfully.
Jan 20 19:04:56 compute-0 podman[257248]: 2026-01-20 19:04:56.009366093 +0000 UTC m=+0.127536973 container died a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brown, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:04:56 compute-0 systemd-coredump[257108]: Process 256753 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 47:
                                                    #0  0x00007fc95003f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 19:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aa0ff6f18ad70fac5c05a43a99ec0ea110218f57ccb1449577eac2df52f2485-merged.mount: Deactivated successfully.
Jan 20 19:04:56 compute-0 podman[257248]: 2026-01-20 19:04:56.041441758 +0000 UTC m=+0.159612628 container remove a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:04:56 compute-0 systemd[1]: libpod-conmon-a3336c3ece4c6c458d753eee6d953795798fdc57e87a4c642a1eb16f05910df5.scope: Deactivated successfully.
Jan 20 19:04:56 compute-0 systemd[1]: systemd-coredump@13-257107-0.service: Deactivated successfully.
Jan 20 19:04:56 compute-0 systemd[1]: systemd-coredump@13-257107-0.service: Consumed 1.112s CPU time.
Jan 20 19:04:56 compute-0 podman[257288]: 2026-01-20 19:04:56.184282173 +0000 UTC m=+0.027338298 container died 523e647171cf291007556fd13fcb42908a492154545ae31cc25017a55d13fcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 19:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fc94cea780b6fb43253686b5ad1213af61b36ee1e8306b6d4947b9b0d16d359-merged.mount: Deactivated successfully.
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.209571146 +0000 UTC m=+0.045851039 container create b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_moser, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:04:56 compute-0 podman[257288]: 2026-01-20 19:04:56.224759316 +0000 UTC m=+0.067815431 container remove 523e647171cf291007556fd13fcb42908a492154545ae31cc25017a55d13fcaf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:04:56 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 19:04:56 compute-0 systemd[1]: Started libpod-conmon-b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045.scope.
Jan 20 19:04:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d44b58ac107983ae1e05d0b229c6fcc8320b703312588b3c45f44926a7445a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d44b58ac107983ae1e05d0b229c6fcc8320b703312588b3c45f44926a7445a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d44b58ac107983ae1e05d0b229c6fcc8320b703312588b3c45f44926a7445a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d44b58ac107983ae1e05d0b229c6fcc8320b703312588b3c45f44926a7445a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.189146035 +0000 UTC m=+0.025425948 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.284271672 +0000 UTC m=+0.120551585 container init b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_moser, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.293533172 +0000 UTC m=+0.129813075 container start b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_moser, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.298886256 +0000 UTC m=+0.135166169 container attach b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:56 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:04:56 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.412s CPU time.
Jan 20 19:04:56 compute-0 friendly_moser[257325]: {
Jan 20 19:04:56 compute-0 friendly_moser[257325]:     "0": [
Jan 20 19:04:56 compute-0 friendly_moser[257325]:         {
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "devices": [
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "/dev/loop3"
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             ],
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "lv_name": "ceph_lv0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "lv_size": "21470642176",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "name": "ceph_lv0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "tags": {
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.crush_device_class": "",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.encrypted": "0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.osd_id": "0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.type": "block",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.vdo": "0",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:                 "ceph.with_tpm": "0"
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             },
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "type": "block",
Jan 20 19:04:56 compute-0 friendly_moser[257325]:             "vg_name": "ceph_vg0"
Jan 20 19:04:56 compute-0 friendly_moser[257325]:         }
Jan 20 19:04:56 compute-0 friendly_moser[257325]:     ]
Jan 20 19:04:56 compute-0 friendly_moser[257325]: }
Jan 20 19:04:56 compute-0 systemd[1]: libpod-b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045.scope: Deactivated successfully.
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.590011583 +0000 UTC m=+0.426291446 container died b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_moser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 19:04:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-58d44b58ac107983ae1e05d0b229c6fcc8320b703312588b3c45f44926a7445a-merged.mount: Deactivated successfully.
Jan 20 19:04:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:04:56 compute-0 podman[257293]: 2026-01-20 19:04:56.639675463 +0000 UTC m=+0.475955326 container remove b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 19:04:56 compute-0 systemd[1]: libpod-conmon-b895d843d4410d5c55c11a02ebd2ef625bde3182b2ce646f86d31655d8ff2045.scope: Deactivated successfully.
Jan 20 19:04:56 compute-0 sudo[257183]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:56 compute-0 sudo[257373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:56 compute-0 sudo[257373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:56 compute-0 sudo[257373]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:56 compute-0 sudo[257398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:04:56 compute-0 sudo[257398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:56 compute-0 ceph-mon[74381]: pgmap v673: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:04:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:04:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:56.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:04:57 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 19:04:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:57.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:04:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:57.130Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:04:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:04:57.131Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.228253429 +0000 UTC m=+0.051794809 container create 54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 19:04:57 compute-0 systemd[1]: Started libpod-conmon-54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a.scope.
Jan 20 19:04:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.200065978 +0000 UTC m=+0.023607228 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.305826582 +0000 UTC m=+0.129367762 container init 54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bhabha, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.313413437 +0000 UTC m=+0.136954597 container start 54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bhabha, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.31686798 +0000 UTC m=+0.140409160 container attach 54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bhabha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:04:57 compute-0 bold_bhabha[257476]: 167 167
Jan 20 19:04:57 compute-0 systemd[1]: libpod-54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a.scope: Deactivated successfully.
Jan 20 19:04:57 compute-0 conmon[257476]: conmon 54fe8bc7896e30c8cedd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a.scope/container/memory.events
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.32057234 +0000 UTC m=+0.144113500 container died 54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bhabha, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fe5b53af6bffb6e26d0c9d44fa2793203a352b121c55dde90a19d0021a25d82-merged.mount: Deactivated successfully.
Jan 20 19:04:57 compute-0 podman[257460]: 2026-01-20 19:04:57.358971936 +0000 UTC m=+0.182513116 container remove 54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_bhabha, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:04:57 compute-0 systemd[1]: libpod-conmon-54fe8bc7896e30c8cedd28e84709fa1a4e8e28ea2a56b16991c92c01d6d76f9a.scope: Deactivated successfully.
Jan 20 19:04:57 compute-0 podman[257500]: 2026-01-20 19:04:57.531604325 +0000 UTC m=+0.048332065 container create 6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hoover, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:57 compute-0 systemd[1]: Started libpod-conmon-6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68.scope.
Jan 20 19:04:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:57 compute-0 podman[257500]: 2026-01-20 19:04:57.513539367 +0000 UTC m=+0.030267127 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb9638c0aa8f4a5a29ad7ec9b30630504a0ae46de41115dd309ab2e50ed69fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb9638c0aa8f4a5a29ad7ec9b30630504a0ae46de41115dd309ab2e50ed69fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb9638c0aa8f4a5a29ad7ec9b30630504a0ae46de41115dd309ab2e50ed69fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/beb9638c0aa8f4a5a29ad7ec9b30630504a0ae46de41115dd309ab2e50ed69fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:57 compute-0 podman[257500]: 2026-01-20 19:04:57.628495221 +0000 UTC m=+0.145222981 container init 6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 20 19:04:57 compute-0 podman[257500]: 2026-01-20 19:04:57.635270463 +0000 UTC m=+0.151998203 container start 6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:04:57 compute-0 podman[257500]: 2026-01-20 19:04:57.638780688 +0000 UTC m=+0.155508448 container attach 6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hoover, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:04:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:04:58 compute-0 lvm[257592]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:58 compute-0 lvm[257592]: VG ceph_vg0 finished
Jan 20 19:04:58 compute-0 goofy_hoover[257517]: {}
Jan 20 19:04:58 compute-0 systemd[1]: libpod-6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68.scope: Deactivated successfully.
Jan 20 19:04:58 compute-0 systemd[1]: libpod-6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68.scope: Consumed 1.052s CPU time.
Jan 20 19:04:58 compute-0 podman[257500]: 2026-01-20 19:04:58.312204223 +0000 UTC m=+0.828931983 container died 6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-beb9638c0aa8f4a5a29ad7ec9b30630504a0ae46de41115dd309ab2e50ed69fb-merged.mount: Deactivated successfully.
Jan 20 19:04:58 compute-0 podman[257500]: 2026-01-20 19:04:58.358265116 +0000 UTC m=+0.874992866 container remove 6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:58 compute-0 systemd[1]: libpod-conmon-6fecca30daee3cf849a4074d2840c02cbad3e1da72c79563e181c641bc429e68.scope: Deactivated successfully.
Jan 20 19:04:58 compute-0 sudo[257398]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:58 compute-0 sudo[257608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:04:58 compute-0 sudo[257608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:58 compute-0 sudo[257608]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:58 compute-0 ceph-mon[74381]: pgmap v674: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Jan 20 19:04:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:04:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:04:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:04:58.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:04:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:04:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:04:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:04:59.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:04:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:59] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:04:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:04:59] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:04:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:00 compute-0 nova_compute[254061]: 2026-01-20 19:05:00.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:00 compute-0 nova_compute[254061]: 2026-01-20 19:05:00.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190500 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:05:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:00.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:00 compute-0 ceph-mon[74381]: pgmap v675: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.162 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.162 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.163 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.163 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.164 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:05:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:01.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:05:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128320347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.723 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.861 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.862 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4794MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.863 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.863 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:05:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.931 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.931 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:05:01 compute-0 nova_compute[254061]: 2026-01-20 19:05:01.948 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:05:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1128320347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:05:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2960078393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:02 compute-0 nova_compute[254061]: 2026-01-20 19:05:02.369 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:05:02 compute-0 nova_compute[254061]: 2026-01-20 19:05:02.374 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:05:02 compute-0 nova_compute[254061]: 2026-01-20 19:05:02.393 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:05:02 compute-0 nova_compute[254061]: 2026-01-20 19:05:02.394 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:05:02 compute-0 nova_compute[254061]: 2026-01-20 19:05:02.394 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:05:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:02.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:03 compute-0 ceph-mon[74381]: pgmap v676: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2960078393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.394 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.394 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.395 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.395 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.414 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.414 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.415 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.415 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.415 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:05:03 compute-0 nova_compute[254061]: 2026-01-20 19:05:03.415 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:05:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:03.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/997487984' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:04.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:05 compute-0 ceph-mon[74381]: pgmap v677: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/677859158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1589941888' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:05.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:05:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1648530255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190506 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:05:06 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 14.
Jan 20 19:05:06 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:05:06 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.412s CPU time.
Jan 20 19:05:06 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 19:05:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:06 compute-0 podman[257735]: 2026-01-20 19:05:06.768332936 +0000 UTC m=+0.048543090 container create 5256222f490819f563fd54b46fc4b0b27af425fe64ecea947d310103fd077b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515e31a79b7400cdbd59d87239d8412710ed25b4f278eb9416ecc0881fce7c26/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515e31a79b7400cdbd59d87239d8412710ed25b4f278eb9416ecc0881fce7c26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515e31a79b7400cdbd59d87239d8412710ed25b4f278eb9416ecc0881fce7c26/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515e31a79b7400cdbd59d87239d8412710ed25b4f278eb9416ecc0881fce7c26/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:06 compute-0 podman[257735]: 2026-01-20 19:05:06.813784413 +0000 UTC m=+0.093994597 container init 5256222f490819f563fd54b46fc4b0b27af425fe64ecea947d310103fd077b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 19:05:06 compute-0 podman[257735]: 2026-01-20 19:05:06.819294062 +0000 UTC m=+0.099504216 container start 5256222f490819f563fd54b46fc4b0b27af425fe64ecea947d310103fd077b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:05:06 compute-0 bash[257735]: 5256222f490819f563fd54b46fc4b0b27af425fe64ecea947d310103fd077b6f
Jan 20 19:05:06 compute-0 podman[257735]: 2026-01-20 19:05:06.751933414 +0000 UTC m=+0.032143588 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:05:06 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 19:05:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:05:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:06.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:07.132Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:05:07 compute-0 ceph-mon[74381]: pgmap v678: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 20 19:05:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:07.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:05:08 compute-0 ceph-mon[74381]: pgmap v679: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:05:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:08.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:09.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:09] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:05:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:09] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Jan 20 19:05:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:05:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:10.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:11 compute-0 ceph-mon[74381]: pgmap v680: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:05:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:05:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:11.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Jan 20 19:05:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:12 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:05:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:12 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:05:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:12.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:13 compute-0 podman[257797]: 2026-01-20 19:05:13.12896511 +0000 UTC m=+0.088025117 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:05:13 compute-0 ceph-mon[74381]: pgmap v681: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 170 B/s wr, 0 op/s
Jan 20 19:05:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:13.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 20 19:05:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:14.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:15 compute-0 ceph-mon[74381]: pgmap v682: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 20 19:05:15 compute-0 sudo[257820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:05:15 compute-0 sudo[257820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:15 compute-0 sudo[257820]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:15.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 20 19:05:16 compute-0 ceph-mon[74381]: pgmap v683: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 511 B/s wr, 1 op/s
Jan 20 19:05:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:16.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:17.133Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:05:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=cleanup t=2026-01-20T19:05:17.531080885Z level=info msg="Completed cleanup jobs" duration=23.34819ms
Jan 20 19:05:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:17.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana.update.checker t=2026-01-20T19:05:17.644588958Z level=info msg="Update check succeeded" duration=52.451555ms
Jan 20 19:05:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugins.update.checker t=2026-01-20T19:05:17.646911111Z level=info msg="Update check succeeded" duration=54.151622ms
Jan 20 19:05:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:05:18 compute-0 ceph-mon[74381]: pgmap v684: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:05:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 19:05:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:05:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:05:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f8000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:19.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:19] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:05:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:19] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:05:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:05:20 compute-0 podman[257865]: 2026-01-20 19:05:20.145054752 +0000 UTC m=+0.108624232 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:05:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:20 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:20.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:20 compute-0 ceph-mon[74381]: pgmap v685: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 938 B/s wr, 2 op/s
Jan 20 19:05:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:21 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:21.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:21 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1023 B/s wr, 2 op/s
Jan 20 19:05:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:22 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:05:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:22 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:05:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190522 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:05:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:22 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:22.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:23 compute-0 ceph-mon[74381]: pgmap v686: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1023 B/s wr, 2 op/s
Jan 20 19:05:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:23.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Jan 20 19:05:24 compute-0 ceph-mon[74381]: pgmap v687: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 1.6 KiB/s wr, 5 op/s
Jan 20 19:05:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:24 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:24.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:05:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:25.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 19:05:26 compute-0 ceph-mon[74381]: pgmap v688: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 19:05:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:26 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0001480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:26.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:27.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:05:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:27.134Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:05:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:27.134Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:05:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:27 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:27.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 19:05:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:27 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190528 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:05:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:28 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:28.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:29 compute-0 ceph-mon[74381]: pgmap v689: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Jan 20 19:05:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:29 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:29.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:29] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 19:05:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:29] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 19:05:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 19:05:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:29 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:05:30.280 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:05:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:05:30.281 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:05:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:05:30.281 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:05:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:30 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:30.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:31 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:31 compute-0 ceph-mon[74381]: pgmap v690: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.558127) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935931558211, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2118, "num_deletes": 251, "total_data_size": 4079855, "memory_usage": 4130640, "flush_reason": "Manual Compaction"}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935931586142, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 3998103, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20435, "largest_seqno": 22552, "table_properties": {"data_size": 3988770, "index_size": 5827, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19529, "raw_average_key_size": 20, "raw_value_size": 3969916, "raw_average_value_size": 4092, "num_data_blocks": 257, "num_entries": 970, "num_filter_entries": 970, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935708, "oldest_key_time": 1768935708, "file_creation_time": 1768935931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 28059 microseconds, and 13715 cpu microseconds.
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.586192) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 3998103 bytes OK
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.586217) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.588076) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.588098) EVENT_LOG_v1 {"time_micros": 1768935931588091, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.588119) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4071210, prev total WAL file size 4071210, number of live WAL files 2.
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.590098) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3904KB)], [44(13MB)]
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935931590186, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17891874, "oldest_snapshot_seqno": -1}
Jan 20 19:05:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:31.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5688 keys, 15697879 bytes, temperature: kUnknown
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935931718870, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15697879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15657062, "index_size": 25475, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14277, "raw_key_size": 143531, "raw_average_key_size": 25, "raw_value_size": 15551470, "raw_average_value_size": 2734, "num_data_blocks": 1049, "num_entries": 5688, "num_filter_entries": 5688, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.719095) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15697879 bytes
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.720675) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.0 rd, 121.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 13.3 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(8.4) write-amplify(3.9) OK, records in: 6204, records dropped: 516 output_compression: NoCompression
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.720689) EVENT_LOG_v1 {"time_micros": 1768935931720682, "job": 22, "event": "compaction_finished", "compaction_time_micros": 128733, "compaction_time_cpu_micros": 60296, "output_level": 6, "num_output_files": 1, "total_output_size": 15697879, "num_input_records": 6204, "num_output_records": 5688, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935931721483, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935931723936, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.589938) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.724023) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.724029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.724030) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.724031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:05:31 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:05:31.724033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:05:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 19:05:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:31 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:32 compute-0 ceph-mon[74381]: pgmap v691: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Jan 20 19:05:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:32 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:32.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:33.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 19:05:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:34 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0001fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:34.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:35 compute-0 ceph-mon[74381]: pgmap v692: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 682 B/s wr, 2 op/s
Jan 20 19:05:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:35 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:35.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:35 compute-0 sudo[257907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:05:35 compute-0 sudo[257907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:35 compute-0 sudo[257907]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:35 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:36 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:36.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:37.135Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:05:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:37.136Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:05:37 compute-0 ceph-mon[74381]: pgmap v693: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:37 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:37.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:37 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:38 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:38.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:39 compute-0 ceph-mon[74381]: pgmap v694: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:05:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:39.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:39] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 19:05:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:39] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Jan 20 19:05:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:05:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:40 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:40.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:41 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:41 compute-0 ceph-mon[74381]: pgmap v695: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:41.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:41 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:42 compute-0 ceph-mon[74381]: pgmap v696: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:42 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:42.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:43 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:43.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:43 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:44 compute-0 podman[257940]: 2026-01-20 19:05:44.091111366 +0000 UTC m=+0.068539191 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:05:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:44 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:45.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:45 compute-0 ceph-mon[74381]: pgmap v697: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:45 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:45.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:45 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:46 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:47.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:47.137Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:05:47 compute-0 ceph-mon[74381]: pgmap v698: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:47 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:47.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:05:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:47 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:05:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3408564017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:05:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:05:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3408564017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:05:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:48 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:49.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:49 compute-0 ceph-mon[74381]: pgmap v699: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:05:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3408564017' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:05:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3408564017' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:05:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:49 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:49] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:05:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:49] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:05:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:49 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:50 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec0025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:51.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:51 compute-0 podman[257967]: 2026-01-20 19:05:51.099608044 +0000 UTC m=+0.074544673 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 19:05:51 compute-0 ceph-mon[74381]: pgmap v700: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:51 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:51.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:51 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:52 compute-0 ceph-mon[74381]: pgmap v701: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:52 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:53.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:53 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:53.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:53 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:54 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:54 compute-0 ceph-mon[74381]: pgmap v702: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:05:54
Jan 20 19:05:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:05:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:05:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.meta', 'vms', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', '.nfs', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'volumes']
Jan 20 19:05:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:05:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:55.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:05:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:55 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:55.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:05:55 compute-0 sudo[258001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:05:55 compute-0 sudo[258001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:55 compute-0 sudo[258001]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:55 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:05:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=404 latency=0.001000027s ======
Jan 20 19:05:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:56.485 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.001000027s
Jan 20 19:05:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:05:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - - [20/Jan/2026:19:05:56.501 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.001000026s
Jan 20 19:05:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:05:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:56 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:05:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:57.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:05:57 compute-0 ceph-mon[74381]: pgmap v703: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:05:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:05:57.138Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:05:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:57 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:57.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:05:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:57 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190558 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:05:58 compute-0 sudo[258029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:05:58 compute-0 sudo[258029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:58 compute-0 sudo[258029]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:58 compute-0 sudo[258054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:05:58 compute-0 sudo[258054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:58 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:05:59.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:59 compute-0 ceph-mon[74381]: pgmap v704: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:05:59 compute-0 sudo[258054]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 19:05:59 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:05:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:59 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:05:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:05:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:05:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:05:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:05:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:59] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:05:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:05:59] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:05:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 19:05:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:05:59 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:00 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:00 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:00 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190600 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:06:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:01.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:06:01 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.167 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.167 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.167 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:01 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:06:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1385698615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.638 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:01.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.820 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.821 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4905MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.822 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.822 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.876 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.876 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:06:01 compute-0 nova_compute[254061]: 2026-01-20 19:06:01.892 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 19:06:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:01 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:02 compute-0 ceph-mon[74381]: pgmap v705: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 19:06:02 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:06:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 20 19:06:02 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 20 19:06:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:06:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2053503602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:02 compute-0 nova_compute[254061]: 2026-01-20 19:06:02.393 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:02 compute-0 nova_compute[254061]: 2026-01-20 19:06:02.398 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:06:02 compute-0 nova_compute[254061]: 2026-01-20 19:06:02.423 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:06:02 compute-0 nova_compute[254061]: 2026-01-20 19:06:02.426 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:06:02 compute-0 nova_compute[254061]: 2026-01-20 19:06:02.426 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:06:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:06:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:02 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec002ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:03.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 19:06:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1385698615' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:03 compute-0 ceph-mon[74381]: pgmap v706: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Jan 20 19:06:03 compute-0 ceph-mon[74381]: osdmap e160: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2053503602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 20 19:06:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 19:06:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.425 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 135 B/s rd, 0 op/s
Jan 20 19:06:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.444 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.444 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.445 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:06:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.456 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.456 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.456 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.457 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.457 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:03 compute-0 nova_compute[254061]: 2026-01-20 19:06:03.457 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:06:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:06:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:03 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:03 compute-0 sudo[258158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:03 compute-0 sudo[258158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:03 compute-0 sudo[258158]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:03 compute-0 sudo[258183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:06:03 compute-0 sudo[258183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:03.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:03 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:04.027862406 +0000 UTC m=+0.061963673 container create 2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_franklin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:06:04 compute-0 systemd[1]: Started libpod-conmon-2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607.scope.
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:03.999635724 +0000 UTC m=+0.033737081 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:06:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:04 compute-0 nova_compute[254061]: 2026-01-20 19:06:04.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:04.130147756 +0000 UTC m=+0.164249023 container init 2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_franklin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:04.137987837 +0000 UTC m=+0.172089104 container start 2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:04.141129823 +0000 UTC m=+0.175231090 container attach 2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:06:04 compute-0 keen_franklin[258269]: 167 167
Jan 20 19:06:04 compute-0 systemd[1]: libpod-2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607.scope: Deactivated successfully.
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:04.146000934 +0000 UTC m=+0.180102181 container died 2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_franklin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb3fc24f613fc92befb24e957183af2bdaabddba79384d84ed34b62b48071fd9-merged.mount: Deactivated successfully.
Jan 20 19:06:04 compute-0 podman[258252]: 2026-01-20 19:06:04.193472275 +0000 UTC m=+0.227573532 container remove 2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 20 19:06:04 compute-0 systemd[1]: libpod-conmon-2d9dd20215f10c8ed5fa6f395542674b9289e41668dbdf885a256744be527607.scope: Deactivated successfully.
Jan 20 19:06:04 compute-0 podman[258293]: 2026-01-20 19:06:04.345598851 +0000 UTC m=+0.032969541 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:06:04 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 20 19:06:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 20 19:06:04 compute-0 podman[258293]: 2026-01-20 19:06:04.686025748 +0000 UTC m=+0.373396388 container create c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: osdmap e161: 3 total, 3 up, 3 in
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: pgmap v709: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail; 135 B/s rd, 0 op/s
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/262353834' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3696767094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:04 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 20 19:06:04 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 20 19:06:05 compute-0 systemd[1]: Started libpod-conmon-c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648.scope.
Jan 20 19:06:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:05.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ae8c9821fcc0079767aa2e5e9e154caa707f29963251d451a84db51c7d1dd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ae8c9821fcc0079767aa2e5e9e154caa707f29963251d451a84db51c7d1dd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ae8c9821fcc0079767aa2e5e9e154caa707f29963251d451a84db51c7d1dd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ae8c9821fcc0079767aa2e5e9e154caa707f29963251d451a84db51c7d1dd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ae8c9821fcc0079767aa2e5e9e154caa707f29963251d451a84db51c7d1dd4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:05 compute-0 podman[258293]: 2026-01-20 19:06:05.084314237 +0000 UTC m=+0.771684957 container init c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_allen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:06:05 compute-0 podman[258293]: 2026-01-20 19:06:05.094362989 +0000 UTC m=+0.781733619 container start c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 19:06:05 compute-0 podman[258293]: 2026-01-20 19:06:05.098102689 +0000 UTC m=+0.785473359 container attach c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:06:05 compute-0 nova_compute[254061]: 2026-01-20 19:06:05.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:06:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:05 compute-0 priceless_allen[258309]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:06:05 compute-0 priceless_allen[258309]: --> All data devices are unavailable
Jan 20 19:06:05 compute-0 systemd[1]: libpod-c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648.scope: Deactivated successfully.
Jan 20 19:06:05 compute-0 podman[258293]: 2026-01-20 19:06:05.484188769 +0000 UTC m=+1.171559399 container died c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_allen, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 19:06:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:05 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ae8c9821fcc0079767aa2e5e9e154caa707f29963251d451a84db51c7d1dd4-merged.mount: Deactivated successfully.
Jan 20 19:06:05 compute-0 podman[258293]: 2026-01-20 19:06:05.535822553 +0000 UTC m=+1.223193193 container remove c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_allen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:06:05 compute-0 systemd[1]: libpod-conmon-c2e2100408973c9ef5cb290b2f4ad9a293a71c70b41fd23c7e7a9a333c515648.scope: Deactivated successfully.
Jan 20 19:06:05 compute-0 sudo[258183]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:05.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:05 compute-0 sudo[258337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:05 compute-0 sudo[258337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:05 compute-0 sudo[258337]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:05 compute-0 sudo[258362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:06:05 compute-0 sudo[258362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:05 compute-0 ceph-mon[74381]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 20 19:06:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1098871063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:05 compute-0 ceph-mon[74381]: osdmap e162: 3 total, 3 up, 3 in
Jan 20 19:06:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/521467190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:05 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4002f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.147930552 +0000 UTC m=+0.054647725 container create 5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 19:06:06 compute-0 systemd[1]: Started libpod-conmon-5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1.scope.
Jan 20 19:06:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.120499462 +0000 UTC m=+0.027216685 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.224987712 +0000 UTC m=+0.131704865 container init 5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.233786659 +0000 UTC m=+0.140503782 container start 5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.236702118 +0000 UTC m=+0.143419281 container attach 5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:06:06 compute-0 priceless_fermat[258441]: 167 167
Jan 20 19:06:06 compute-0 systemd[1]: libpod-5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1.scope: Deactivated successfully.
Jan 20 19:06:06 compute-0 conmon[258441]: conmon 5c618fca79fae0ca5852 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1.scope/container/memory.events
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.239181945 +0000 UTC m=+0.145899098 container died 5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-644a5856c309a60d4888cee33484ed7320b5ee8b72bcfa044eb064998752d241-merged.mount: Deactivated successfully.
Jan 20 19:06:06 compute-0 podman[258427]: 2026-01-20 19:06:06.274773156 +0000 UTC m=+0.181490279 container remove 5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_fermat, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:06:06 compute-0 systemd[1]: libpod-conmon-5c618fca79fae0ca5852b493c971e8d299bb8a1860751aee09580e2e01e88ce1.scope: Deactivated successfully.
Jan 20 19:06:06 compute-0 podman[258466]: 2026-01-20 19:06:06.466929392 +0000 UTC m=+0.072823347 container create c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:06:06 compute-0 systemd[1]: Started libpod-conmon-c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e.scope.
Jan 20 19:06:06 compute-0 podman[258466]: 2026-01-20 19:06:06.424488907 +0000 UTC m=+0.030382872 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:06:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419c2155e2b09b5e3b67cb3dc9bae69cd6c4af5a8c6d7449eebe1de1d00c2be7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419c2155e2b09b5e3b67cb3dc9bae69cd6c4af5a8c6d7449eebe1de1d00c2be7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419c2155e2b09b5e3b67cb3dc9bae69cd6c4af5a8c6d7449eebe1de1d00c2be7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/419c2155e2b09b5e3b67cb3dc9bae69cd6c4af5a8c6d7449eebe1de1d00c2be7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:06 compute-0 podman[258466]: 2026-01-20 19:06:06.568099272 +0000 UTC m=+0.173993237 container init c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:06 compute-0 podman[258466]: 2026-01-20 19:06:06.577850455 +0000 UTC m=+0.183744400 container start c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:06 compute-0 podman[258466]: 2026-01-20 19:06:06.580740203 +0000 UTC m=+0.186634168 container attach c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_proskuriakova, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 19:06:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.667914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935966668001, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 630, "num_deletes": 252, "total_data_size": 826795, "memory_usage": 839392, "flush_reason": "Manual Compaction"}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935966677284, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 624298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22553, "largest_seqno": 23182, "table_properties": {"data_size": 621125, "index_size": 1081, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8317, "raw_average_key_size": 20, "raw_value_size": 614428, "raw_average_value_size": 1513, "num_data_blocks": 46, "num_entries": 406, "num_filter_entries": 406, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935932, "oldest_key_time": 1768935932, "file_creation_time": 1768935966, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9421 microseconds, and 5295 cpu microseconds.
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.677345) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 624298 bytes OK
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.677374) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.679267) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.679290) EVENT_LOG_v1 {"time_micros": 1768935966679283, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.679321) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 823381, prev total WAL file size 823381, number of live WAL files 2.
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.680200) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373535' seq:0, type:0; will stop at (end)
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(609KB)], [47(14MB)]
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935966680321, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16322177, "oldest_snapshot_seqno": -1}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5583 keys, 12395354 bytes, temperature: kUnknown
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935966776842, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12395354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12359329, "index_size": 20939, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 141830, "raw_average_key_size": 25, "raw_value_size": 12259642, "raw_average_value_size": 2195, "num_data_blocks": 851, "num_entries": 5583, "num_filter_entries": 5583, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768935966, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.777315) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12395354 bytes
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.779172) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.9 rd, 128.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 15.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(46.0) write-amplify(19.9) OK, records in: 6094, records dropped: 511 output_compression: NoCompression
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.779206) EVENT_LOG_v1 {"time_micros": 1768935966779190, "job": 24, "event": "compaction_finished", "compaction_time_micros": 96640, "compaction_time_cpu_micros": 51168, "output_level": 6, "num_output_files": 1, "total_output_size": 12395354, "num_input_records": 6094, "num_output_records": 5583, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935966779598, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935966784610, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.680034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.784696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.784703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.784704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.784705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:06:06 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:06:06.784707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]: {
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:     "0": [
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:         {
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "devices": [
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "/dev/loop3"
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             ],
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "lv_name": "ceph_lv0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "lv_size": "21470642176",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "name": "ceph_lv0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "tags": {
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.cluster_name": "ceph",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.crush_device_class": "",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.encrypted": "0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.osd_id": "0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.type": "block",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.vdo": "0",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:                 "ceph.with_tpm": "0"
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             },
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "type": "block",
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:             "vg_name": "ceph_vg0"
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:         }
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]:     ]
Jan 20 19:06:06 compute-0 sad_proskuriakova[258482]: }
Jan 20 19:06:06 compute-0 systemd[1]: libpod-c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e.scope: Deactivated successfully.
Jan 20 19:06:06 compute-0 podman[258491]: 2026-01-20 19:06:06.919061104 +0000 UTC m=+0.031640365 container died c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:06:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-419c2155e2b09b5e3b67cb3dc9bae69cd6c4af5a8c6d7449eebe1de1d00c2be7-merged.mount: Deactivated successfully.
Jan 20 19:06:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:06 compute-0 podman[258491]: 2026-01-20 19:06:06.962833955 +0000 UTC m=+0.075413196 container remove c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_proskuriakova, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:06:06 compute-0 systemd[1]: libpod-conmon-c8da8472b094cd7897d821d9137d27fb3a1b6439befcf77102b24e9962b75a1e.scope: Deactivated successfully.
Jan 20 19:06:07 compute-0 ceph-mon[74381]: pgmap v711: 337 pgs: 337 active+clean; 458 KiB data, 158 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:07 compute-0 sudo[258362]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:07.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:07 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:06:07 compute-0 sudo[258506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:07 compute-0 sudo[258506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:07 compute-0 sudo[258506]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:07.139Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:06:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:07.139Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:06:07 compute-0 sudo[258531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:06:07 compute-0 sudo[258531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v712: 337 pgs: 337 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.7 MiB/s wr, 55 op/s
Jan 20 19:06:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:07 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.599989931 +0000 UTC m=+0.054021349 container create 6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:07 compute-0 systemd[1]: Started libpod-conmon-6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324.scope.
Jan 20 19:06:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:07.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.657264517 +0000 UTC m=+0.111296025 container init 6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.664871702 +0000 UTC m=+0.118903160 container start 6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:06:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 20 19:06:07 compute-0 affectionate_wilson[258615]: 167 167
Jan 20 19:06:07 compute-0 systemd[1]: libpod-6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324.scope: Deactivated successfully.
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.669563569 +0000 UTC m=+0.123594987 container attach 6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.670289709 +0000 UTC m=+0.124321137 container died 6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:06:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.581941214 +0000 UTC m=+0.035972652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:06:07 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 20 19:06:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70574f90ca0c5f7c20aac986894e30497923240255f46014fca870761310801-merged.mount: Deactivated successfully.
Jan 20 19:06:07 compute-0 podman[258598]: 2026-01-20 19:06:07.720783441 +0000 UTC m=+0.174814859 container remove 6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:06:07 compute-0 systemd[1]: libpod-conmon-6b0090211a662a5d5630fd96358991edf46a83931f1174a01013a66fcc276324.scope: Deactivated successfully.
Jan 20 19:06:07 compute-0 podman[258639]: 2026-01-20 19:06:07.884583092 +0000 UTC m=+0.035700764 container create 2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:06:07 compute-0 systemd[1]: Started libpod-conmon-2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8.scope.
Jan 20 19:06:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:07 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4862c1d15133bc9eb3385c5886f148139b565bd81c5d7ece049969b8b1586c32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4862c1d15133bc9eb3385c5886f148139b565bd81c5d7ece049969b8b1586c32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4862c1d15133bc9eb3385c5886f148139b565bd81c5d7ece049969b8b1586c32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4862c1d15133bc9eb3385c5886f148139b565bd81c5d7ece049969b8b1586c32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:07 compute-0 podman[258639]: 2026-01-20 19:06:07.868381276 +0000 UTC m=+0.019498968 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:06:07 compute-0 podman[258639]: 2026-01-20 19:06:07.968775214 +0000 UTC m=+0.119892946 container init 2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:06:07 compute-0 podman[258639]: 2026-01-20 19:06:07.977777448 +0000 UTC m=+0.128895130 container start 2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_carver, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:07 compute-0 podman[258639]: 2026-01-20 19:06:07.981520689 +0000 UTC m=+0.132638581 container attach 2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_carver, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 19:06:08 compute-0 ceph-mon[74381]: pgmap v712: 337 pgs: 337 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 3.7 MiB/s wr, 55 op/s
Jan 20 19:06:08 compute-0 ceph-mon[74381]: osdmap e163: 3 total, 3 up, 3 in
Jan 20 19:06:08 compute-0 lvm[258730]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:06:08 compute-0 lvm[258730]: VG ceph_vg0 finished
Jan 20 19:06:08 compute-0 eloquent_carver[258655]: {}
Jan 20 19:06:08 compute-0 systemd[1]: libpod-2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8.scope: Deactivated successfully.
Jan 20 19:06:08 compute-0 podman[258639]: 2026-01-20 19:06:08.783927065 +0000 UTC m=+0.935044767 container died 2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_carver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:08 compute-0 systemd[1]: libpod-2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8.scope: Consumed 1.215s CPU time.
Jan 20 19:06:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4862c1d15133bc9eb3385c5886f148139b565bd81c5d7ece049969b8b1586c32-merged.mount: Deactivated successfully.
Jan 20 19:06:08 compute-0 podman[258639]: 2026-01-20 19:06:08.8278635 +0000 UTC m=+0.978981182 container remove 2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:06:08 compute-0 systemd[1]: libpod-conmon-2478fda515e4e58090da41818457625ac56076e5f443b1c668dbadcab14553e8.scope: Deactivated successfully.
Jan 20 19:06:08 compute-0 sudo[258531]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:06:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:06:08 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:08 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:08 compute-0 sudo[258747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:06:08 compute-0 sudo[258747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:08 compute-0 sudo[258747]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:09.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v714: 337 pgs: 337 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.4 MiB/s wr, 50 op/s
Jan 20 19:06:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:09 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:09.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:09] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:06:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:09] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Jan 20 19:06:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:09 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 20 19:06:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:10 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:06:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:10 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:06:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:10 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:11 compute-0 ceph-mon[74381]: pgmap v714: 337 pgs: 337 active+clean; 21 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.4 MiB/s wr, 50 op/s
Jan 20 19:06:11 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:06:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:06:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:11.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v715: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 5.1 MiB/s wr, 45 op/s
Jan 20 19:06:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:11 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 20 19:06:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:11.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 20 19:06:11 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 20 19:06:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:11 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:12 compute-0 ceph-mon[74381]: pgmap v715: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 5.1 MiB/s wr, 45 op/s
Jan 20 19:06:12 compute-0 ceph-mon[74381]: osdmap e164: 3 total, 3 up, 3 in
Jan 20 19:06:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:12 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:13.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:13 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:06:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.1 MiB/s wr, 51 op/s
Jan 20 19:06:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:13 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:13.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:13 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:14 compute-0 ceph-mon[74381]: pgmap v717: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 5.1 MiB/s wr, 51 op/s
Jan 20 19:06:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:14 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:15.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:15 compute-0 podman[258778]: 2026-01-20 19:06:15.095607647 +0000 UTC m=+0.064753150 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:06:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Jan 20 19:06:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:15 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:15 compute-0 sudo[258798]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:06:15 compute-0 sudo[258798]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:15 compute-0 sudo[258798]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:15 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:16 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:06:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:16 compute-0 ceph-mon[74381]: pgmap v718: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.6 MiB/s wr, 13 op/s
Jan 20 19:06:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:16 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:17.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:17.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:06:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:17.141Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:06:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:17.141Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:06:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 2.1 MiB/s wr, 13 op/s
Jan 20 19:06:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:17 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:17 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190618 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:06:18 compute-0 ceph-mon[74381]: pgmap v719: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 2.1 MiB/s wr, 13 op/s
Jan 20 19:06:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:19.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:06:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:06:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Jan 20 19:06:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:19] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 20 19:06:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:19] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 20 19:06:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:20 compute-0 ceph-mon[74381]: pgmap v720: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 2.0 MiB/s wr, 13 op/s
Jan 20 19:06:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:20 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:21.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.6 KiB/s wr, 7 op/s
Jan 20 19:06:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:21 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:21 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:22 compute-0 podman[258829]: 2026-01-20 19:06:22.143347384 +0000 UTC m=+0.116266009 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:06:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:22 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:06:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:22 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:23.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:23 compute-0 ceph-mon[74381]: pgmap v721: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.6 KiB/s wr, 7 op/s
Jan 20 19:06:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.6 KiB/s wr, 7 op/s
Jan 20 19:06:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:24 compute-0 ceph-mon[74381]: pgmap v722: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 1.6 KiB/s wr, 7 op/s
Jan 20 19:06:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:24 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190624 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:06:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:25.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 19:06:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:06:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:25 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:25.832 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:06:25 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:25.833 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:06:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:26 compute-0 ceph-mon[74381]: pgmap v723: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 19:06:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:26 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:27.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:27.142Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:06:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 19:06:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:27 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:27.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:27 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:28 compute-0 ceph-mon[74381]: pgmap v724: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 853 B/s wr, 3 op/s
Jan 20 19:06:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:28 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:29.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 19:06:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:29 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:29] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 20 19:06:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:29] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 20 19:06:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:29 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4002690 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:30.282 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:30.282 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:30.282 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:30 compute-0 ceph-mon[74381]: pgmap v725: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 19:06:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:30 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:31.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 19:06:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:31 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:31 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:31.834 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:06:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:31 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:32 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:33.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:33 compute-0 ceph-mon[74381]: pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Jan 20 19:06:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 20 19:06:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:33.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:34 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:35.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:35 compute-0 ceph-mon[74381]: pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Jan 20 19:06:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:06:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:35 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:35.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:35 compute-0 sudo[258872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:06:35 compute-0 sudo[258872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:35 compute-0 sudo[258872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:35 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:36 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:37.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:37.144Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:06:37 compute-0 ceph-mon[74381]: pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:06:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:06:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:37 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:37.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:37 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:38 compute-0 ceph-mon[74381]: pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Jan 20 19:06:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:38 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:39.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:39.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:39] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 20 19:06:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:39] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Jan 20 19:06:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:40 compute-0 ceph-mon[74381]: pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:06:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:40 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:41.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:41 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:41.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:41 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:42 compute-0 ceph-mon[74381]: pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:42 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:43.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:43 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:43.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:43 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:44 compute-0 ceph-mon[74381]: pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:44 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:45.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:45 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:45.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:45 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:46 compute-0 podman[258907]: 2026-01-20 19:06:46.099017567 +0000 UTC m=+0.070807363 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.202 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.203 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.227 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.327 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.328 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.333 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.334 254065 INFO nova.compute.claims [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Claim successful on node compute-0.ctlplane.example.com
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.425 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:46 compute-0 ceph-mon[74381]: pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:06:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919364627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.848 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.853 254065 DEBUG nova.compute.provider_tree [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.875 254065 DEBUG nova.scheduler.client.report [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.912 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.912 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 19:06:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:46 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.981 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 19:06:46 compute-0 nova_compute[254061]: 2026-01-20 19:06:46.982 254065 DEBUG nova.network.neutron [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.007 254065 INFO nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.025 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 19:06:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:47.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.108 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.110 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.110 254065 INFO nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Creating image(s)
Jan 20 19:06:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:47.145Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:06:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:47.145Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:06:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:47.146Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.153 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.189 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.225 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.229 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.230 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:06:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:47 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:47.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:47 compute-0 nova_compute[254061]: 2026-01-20 19:06:47.690 254065 DEBUG nova.virt.libvirt.imagebackend [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image locations are: [{'url': 'rbd://aecbbf3b-b405-507b-97d7-637a83f5b4b1/images/bc57af0c-4b71-499e-9808-3c8fc070a488/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://aecbbf3b-b405-507b-97d7-637a83f5b4b1/images/bc57af0c-4b71-499e-9808-3c8fc070a488/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 20 19:06:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:47 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1919364627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:06:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:06:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786568201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:06:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:06:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1786568201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:06:48 compute-0 nova_compute[254061]: 2026-01-20 19:06:48.589 254065 WARNING oslo_policy.policy [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 20 19:06:48 compute-0 nova_compute[254061]: 2026-01-20 19:06:48.590 254065 WARNING oslo_policy.policy [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 20 19:06:48 compute-0 nova_compute[254061]: 2026-01-20 19:06:48.593 254065 DEBUG nova.policy [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:06:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:48 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:49.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:49 compute-0 ceph-mon[74381]: pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Jan 20 19:06:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1786568201' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:06:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1786568201' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.326 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.377 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.part --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.378 254065 DEBUG nova.virt.images [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] bc57af0c-4b71-499e-9808-3c8fc070a488 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.379 254065 DEBUG nova.privsep.utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.380 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.part /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.516 254065 DEBUG nova.network.neutron [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Successfully created port: cfcfd83d-5be0-4a39-9bc1-94ae78153295 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 19:06:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:49 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.590 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.part /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.converted" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.595 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.646 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386.converted --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.648 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.692 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:49 compute-0 nova_compute[254061]: 2026-01-20 19:06:49.696 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:49] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 20 19:06:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:49] "GET /metrics HTTP/1.1" 200 48397 "" "Prometheus/2.51.0"
Jan 20 19:06:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:49 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 20 19:06:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 20 19:06:50 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 20 19:06:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:50 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4003670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:51.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:51 compute-0 ceph-mon[74381]: pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:06:51 compute-0 ceph-mon[74381]: osdmap e165: 3 total, 3 up, 3 in
Jan 20 19:06:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 20 19:06:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 20 19:06:51 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.439 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.743s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 383 B/s wr, 10 op/s
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.508 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] resizing rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 19:06:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:51 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.605 254065 DEBUG nova.objects.instance [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'migration_context' on Instance uuid 120a65b5-a5a0-4431-bfbb-56c5468d25a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.620 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.621 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Ensure instance console log exists: /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.621 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.621 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.621 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:51.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.810 254065 DEBUG nova.network.neutron [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Successfully updated port: cfcfd83d-5be0-4a39-9bc1-94ae78153295 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.828 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.829 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:06:51 compute-0 nova_compute[254061]: 2026-01-20 19:06:51.829 254065 DEBUG nova.network.neutron [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:06:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:51 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:52 compute-0 ceph-mon[74381]: osdmap e166: 3 total, 3 up, 3 in
Jan 20 19:06:52 compute-0 nova_compute[254061]: 2026-01-20 19:06:52.378 254065 DEBUG nova.compute.manager [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-changed-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:06:52 compute-0 nova_compute[254061]: 2026-01-20 19:06:52.378 254065 DEBUG nova.compute.manager [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Refreshing instance network info cache due to event network-changed-cfcfd83d-5be0-4a39-9bc1-94ae78153295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:06:52 compute-0 nova_compute[254061]: 2026-01-20 19:06:52.379 254065 DEBUG oslo_concurrency.lockutils [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:06:52 compute-0 nova_compute[254061]: 2026-01-20 19:06:52.560 254065 DEBUG nova.network.neutron [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 19:06:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:52 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:53.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:53 compute-0 ceph-mon[74381]: pgmap v738: 337 pgs: 337 active+clean; 41 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 383 B/s wr, 10 op/s
Jan 20 19:06:53 compute-0 podman[259133]: 2026-01-20 19:06:53.176189659 +0000 UTC m=+0.155364474 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.418 254065 DEBUG nova.network.neutron [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:06:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v739: 337 pgs: 337 active+clean; 48 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 330 KiB/s wr, 29 op/s
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.469 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.470 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Instance network_info: |[{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.471 254065 DEBUG oslo_concurrency.lockutils [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.471 254065 DEBUG nova.network.neutron [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Refreshing network info cache for port cfcfd83d-5be0-4a39-9bc1-94ae78153295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.476 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Start _get_guest_xml network_info=[{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'bc57af0c-4b71-499e-9808-3c8fc070a488'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.485 254065 WARNING nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.493 254065 DEBUG nova.virt.libvirt.host [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.494 254065 DEBUG nova.virt.libvirt.host [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.503 254065 DEBUG nova.virt.libvirt.host [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.504 254065 DEBUG nova.virt.libvirt.host [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.505 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.506 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T19:05:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7446c314-5a17-42fd-97d9-a7a94e27bff9',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.507 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.507 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.508 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.508 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.509 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.509 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.510 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.510 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.511 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.512 254065 DEBUG nova.virt.hardware [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.518 254065 DEBUG nova.privsep.utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.519 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:53 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:53.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:06:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2710421768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:06:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:53 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:53 compute-0 nova_compute[254061]: 2026-01-20 19:06:53.979 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.001 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.005 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2710421768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:06:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:06:54 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3187729471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.466 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.468 254065 DEBUG nova.virt.libvirt.vif [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-84660842',display_name='tempest-TestNetworkBasicOps-server-84660842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-84660842',id=1,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU7AxNv7ZeURl1+csXbYC/yFx+mGOUnV8YctLQySdOGLbNML9aoeg2PcBDcpPXGhyvDZG90VA03RRAO3sskooaLNd6/MsjrlH5CyWAQVkGencURtEhb/4rZrGfyF5EWzw==',key_name='tempest-TestNetworkBasicOps-1594253247',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-h1lditqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:06:47Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=120a65b5-a5a0-4431-bfbb-56c5468d25a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} 
Jan 20 19:06:54 compute-0 nova_compute[254061]: virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.468 254065 DEBUG nova.network.os_vif_util [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.469 254065 DEBUG nova.network.os_vif_util [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.472 254065 DEBUG nova.objects.instance [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_devices' on Instance uuid 120a65b5-a5a0-4431-bfbb-56c5468d25a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.493 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] End _get_guest_xml xml=<domain type="kvm">
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <uuid>120a65b5-a5a0-4431-bfbb-56c5468d25a6</uuid>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <name>instance-00000001</name>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <memory>131072</memory>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <vcpu>1</vcpu>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:name>tempest-TestNetworkBasicOps-server-84660842</nova:name>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:creationTime>2026-01-20 19:06:53</nova:creationTime>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:flavor name="m1.nano">
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:memory>128</nova:memory>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:disk>1</nova:disk>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:swap>0</nova:swap>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:vcpus>1</nova:vcpus>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </nova:flavor>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:owner>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </nova:owner>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <nova:ports>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <nova:port uuid="cfcfd83d-5be0-4a39-9bc1-94ae78153295">
Jan 20 19:06:54 compute-0 nova_compute[254061]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         </nova:port>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </nova:ports>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </nova:instance>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <sysinfo type="smbios">
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <system>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <entry name="manufacturer">RDO</entry>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <entry name="product">OpenStack Compute</entry>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <entry name="serial">120a65b5-a5a0-4431-bfbb-56c5468d25a6</entry>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <entry name="uuid">120a65b5-a5a0-4431-bfbb-56c5468d25a6</entry>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <entry name="family">Virtual Machine</entry>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </system>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <os>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <boot dev="hd"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <smbios mode="sysinfo"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </os>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <features>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <vmcoreinfo/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </features>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <clock offset="utc">
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <timer name="hpet" present="no"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <cpu mode="host-model" match="exact">
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <disk type="network" device="disk">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk">
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </source>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <target dev="vda" bus="virtio"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <disk type="network" device="cdrom">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk.config">
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </source>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:06:54 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <target dev="sda" bus="sata"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <interface type="ethernet">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <mac address="fa:16:3e:7d:76:12"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <mtu size="1442"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <target dev="tapcfcfd83d-5b"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <serial type="pty">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <log file="/var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/console.log" append="off"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <video>
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </video>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <input type="tablet" bus="usb"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <rng model="virtio">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <backend model="random">/dev/urandom</backend>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <controller type="usb" index="0"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     <memballoon model="virtio">
Jan 20 19:06:54 compute-0 nova_compute[254061]:       <stats period="10"/>
Jan 20 19:06:54 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:06:54 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:06:54 compute-0 nova_compute[254061]: </domain>
Jan 20 19:06:54 compute-0 nova_compute[254061]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.495 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Preparing to wait for external event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.495 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.496 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.496 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.496 254065 DEBUG nova.virt.libvirt.vif [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-84660842',display_name='tempest-TestNetworkBasicOps-server-84660842',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-84660842',id=1,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU7AxNv7ZeURl1+csXbYC/yFx+mGOUnV8YctLQySdOGLbNML9aoeg2PcBDcpPXGhyvDZG90VA03RRAO3sskooaLNd6/MsjrlH5CyWAQVkGencURtEhb/4rZrGfyF5EWzw==',key_name='tempest-TestNetworkBasicOps-1594253247',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-h1lditqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:06:47Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=120a65b5-a5a0-4431-bfbb-56c5468d25a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.497 254065 DEBUG nova.network.os_vif_util [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.497 254065 DEBUG nova.network.os_vif_util [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.498 254065 DEBUG os_vif [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.527 254065 DEBUG ovsdbapp.backend.ovs_idl [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.528 254065 DEBUG ovsdbapp.backend.ovs_idl [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.528 254065 DEBUG ovsdbapp.backend.ovs_idl [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.528 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.529 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.529 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.530 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.531 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.533 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.542 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.542 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.543 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:06:54 compute-0 nova_compute[254061]: 2026-01-20 19:06:54.544 254065 INFO oslo.privsep.daemon [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpj5qgs1e2/privsep.sock']
Jan 20 19:06:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:06:54
Jan 20 19:06:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:06:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:06:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.nfs', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.log', 'backups', 'default.rgw.meta', '.rgw.root', 'volumes']
Jan 20 19:06:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:06:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:54 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.041 254065 DEBUG nova.network.neutron [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updated VIF entry in instance network info cache for port cfcfd83d-5be0-4a39-9bc1-94ae78153295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.043 254065 DEBUG nova.network.neutron [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.064 254065 DEBUG oslo_concurrency.lockutils [req-b0235dc7-5765-4fca-a4b9-969378935f5c req-7a84ef64-5857-4d78-8d11-5d92aef42122 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:55.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:55 compute-0 ceph-mon[74381]: pgmap v739: 337 pgs: 337 active+clean; 48 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 330 KiB/s wr, 29 op/s
Jan 20 19:06:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3187729471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:06:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.3432453441427364e-05 of space, bias 1.0, pg target 0.01302973603242821 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.329 254065 INFO oslo.privsep.daemon [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Spawned new privsep daemon via rootwrap
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.179 259227 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.184 259227 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.187 259227 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.187 259227 INFO oslo.privsep.daemon [-] privsep daemon running as pid 259227
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:06:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v740: 337 pgs: 337 active+clean; 48 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 330 KiB/s wr, 28 op/s
Jan 20 19:06:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:55 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.666 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.667 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfcfd83d-5b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.668 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcfcfd83d-5b, col_values=(('external_ids', {'iface-id': 'cfcfd83d-5be0-4a39-9bc1-94ae78153295', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7d:76:12', 'vm-uuid': '120a65b5-a5a0-4431-bfbb-56c5468d25a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:06:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:55.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:55 compute-0 NetworkManager[48914]: <info>  [1768936015.7119] manager: (tapcfcfd83d-5b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.710 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.714 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.717 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.718 254065 INFO os_vif [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b')
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.786 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.787 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.787 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:7d:76:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.788 254065 INFO nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Using config drive
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.834 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:55 compute-0 nova_compute[254061]: 2026-01-20 19:06:55.842 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:55 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:56 compute-0 sudo[259252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:06:56 compute-0 sudo[259252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:56 compute-0 sudo[259252]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:56 compute-0 nova_compute[254061]: 2026-01-20 19:06:56.602 254065 INFO nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Creating config drive at /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/disk.config
Jan 20 19:06:56 compute-0 nova_compute[254061]: 2026-01-20 19:06:56.610 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7akytk3j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:06:56 compute-0 nova_compute[254061]: 2026-01-20 19:06:56.748 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7akytk3j" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:56 compute-0 nova_compute[254061]: 2026-01-20 19:06:56.781 254065 DEBUG nova.storage.rbd_utils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:06:56 compute-0 nova_compute[254061]: 2026-01-20 19:06:56.785 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/disk.config 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:06:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:56 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:57 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.068 254065 DEBUG oslo_concurrency.processutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/disk.config 120a65b5-a5a0-4431-bfbb-56c5468d25a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.069 254065 INFO nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Deleting local config drive /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6/disk.config because it was imported into RBD.
Jan 20 19:06:57 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 19:06:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:57.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:57 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 19:06:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:57.146Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:06:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:57.148Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:06:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:06:57.148Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:06:57 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 20 19:06:57 compute-0 kernel: tapcfcfd83d-5b: entered promiscuous mode
Jan 20 19:06:57 compute-0 NetworkManager[48914]: <info>  [1768936017.1959] manager: (tapcfcfd83d-5b): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Jan 20 19:06:57 compute-0 ovn_controller[155128]: 2026-01-20T19:06:57Z|00027|binding|INFO|Claiming lport cfcfd83d-5be0-4a39-9bc1-94ae78153295 for this chassis.
Jan 20 19:06:57 compute-0 ovn_controller[155128]: 2026-01-20T19:06:57Z|00028|binding|INFO|cfcfd83d-5be0-4a39-9bc1-94ae78153295: Claiming fa:16:3e:7d:76:12 10.100.0.6
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.198 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.203 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:57 compute-0 ceph-mon[74381]: pgmap v740: 337 pgs: 337 active+clean; 48 MiB data, 201 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 330 KiB/s wr, 28 op/s
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.214 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:76:12 10.100.0.6'], port_security=['fa:16:3e:7d:76:12 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '120a65b5-a5a0-4431-bfbb-56c5468d25a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be9957c5-bb46-4eb1-886f-ace069f03c77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '2', 'neutron:security_group_ids': '42ca4ceb-b47f-4881-86bc-67ed2569e13c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8599ef1-9c40-40f6-97bc-4f256790f7ed, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=cfcfd83d-5be0-4a39-9bc1-94ae78153295) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.215 165659 INFO neutron.agent.ovn.metadata.agent [-] Port cfcfd83d-5be0-4a39-9bc1-94ae78153295 in datapath be9957c5-bb46-4eb1-886f-ace069f03c77 bound to our chassis
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.217 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network be9957c5-bb46-4eb1-886f-ace069f03c77
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.218 165659 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpvqem_9nk/privsep.sock']
Jan 20 19:06:57 compute-0 systemd-udevd[259356]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:06:57 compute-0 NetworkManager[48914]: <info>  [1768936017.2609] device (tapcfcfd83d-5b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:06:57 compute-0 NetworkManager[48914]: <info>  [1768936017.2616] device (tapcfcfd83d-5b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:06:57 compute-0 systemd-machined[220746]: New machine qemu-1-instance-00000001.
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.305 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:57 compute-0 ovn_controller[155128]: 2026-01-20T19:06:57Z|00029|binding|INFO|Setting lport cfcfd83d-5be0-4a39-9bc1-94ae78153295 ovn-installed in OVS
Jan 20 19:06:57 compute-0 ovn_controller[155128]: 2026-01-20T19:06:57Z|00030|binding|INFO|Setting lport cfcfd83d-5be0-4a39-9bc1-94ae78153295 up in Southbound
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.314 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:06:57 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 20 19:06:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 20 19:06:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:57 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:06:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:57.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.851 254065 DEBUG nova.compute.manager [req-ca91a70d-8d0f-4e05-8f7a-687c520f7683 req-64d5a837-a829-405c-9e18-5903b9e78c9c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.852 254065 DEBUG oslo_concurrency.lockutils [req-ca91a70d-8d0f-4e05-8f7a-687c520f7683 req-64d5a837-a829-405c-9e18-5903b9e78c9c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.852 254065 DEBUG oslo_concurrency.lockutils [req-ca91a70d-8d0f-4e05-8f7a-687c520f7683 req-64d5a837-a829-405c-9e18-5903b9e78c9c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.852 254065 DEBUG oslo_concurrency.lockutils [req-ca91a70d-8d0f-4e05-8f7a-687c520f7683 req-64d5a837-a829-405c-9e18-5903b9e78c9c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:57 compute-0 nova_compute[254061]: 2026-01-20 19:06:57.852 254065 DEBUG nova.compute.manager [req-ca91a70d-8d0f-4e05-8f7a-687c520f7683 req-64d5a837-a829-405c-9e18-5903b9e78c9c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Processing event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.901 165659 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.902 165659 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvqem_9nk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.783 259376 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.790 259376 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.794 259376 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.794 259376 INFO oslo.privsep.daemon [-] privsep daemon running as pid 259376
Jan 20 19:06:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:57.904 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[7c195dae-02c8-40dc-a583-8e00365820e8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:57 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:58 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:58.457 259376 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:58 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:58.457 259376 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:58 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:58.458 259376 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.839 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936018.8386657, 120a65b5-a5a0-4431-bfbb-56c5468d25a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.840 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] VM Started (Lifecycle Event)
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.842 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.845 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.863 254065 INFO nova.virt.libvirt.driver [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Instance spawned successfully.
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.864 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.891 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.897 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.901 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.901 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.902 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.902 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.903 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.903 254065 DEBUG nova.virt.libvirt.driver [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
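
The six registrations above pin down the bus and model defaults libvirt actually chose for this guest. Collected as a mapping, with values taken verbatim from the log lines above, they look like the sketch below; the constant name is illustrative, not Nova's (Nova persists these as instance system metadata).

    # Defaults registered for instance 120a65b5-a5a0-4431-bfbb-56c5468d25a6,
    # values copied from the "Found default for ..." lines above. Illustrative only.
    REGISTERED_IMAGE_DEFAULTS = {
        "hw_cdrom_bus": "sata",
        "hw_disk_bus": "virtio",
        "hw_input_bus": "usb",
        "hw_pointer_model": "usbtablet",
        "hw_video_model": "virtio",
        "hw_vif_model": "virtio",
    }
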
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.926 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.926 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936018.8389587, 120a65b5-a5a0-4431-bfbb-56c5468d25a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.926 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] VM Paused (Lifecycle Event)
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.943 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.946 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936018.8448987, 120a65b5-a5a0-4431-bfbb-56c5468d25a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:06:58 compute-0 nova_compute[254061]: 2026-01-20 19:06:58.946 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] VM Resumed (Lifecycle Event)
Jan 20 19:06:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:58 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.045 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.049 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.097 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] During sync_power_state the instance has a pending task (spawning). Skip.
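
The two sync_power_state entries above compare the database power state (0, NOSTATE) against the hypervisor's (1, RUNNING) but defer while a task is still in flight. A minimal sketch of that decision rule, assuming hypothetical names (this is not Nova's actual _sync_instance_power_state):

    # Power-state sync decision, sketched from the lifecycle handling above.
    # Power states as logged: 0 = NOSTATE (DB), 1 = RUNNING (hypervisor).
    def sync_power_state_decision(task_state, db_power_state, vm_power_state):
        """Return 'skip' while a task is pending, else whether states diverge."""
        if task_state is not None:          # e.g. 'spawning' above -> skip
            return "skip"
        if db_power_state != vm_power_state:
            return "sync"                   # would update the DB record
        return "in-sync"

    # The 'Started' and 'Resumed' events above: task_state='spawning' -> skipped.
    assert sync_power_state_decision("spawning", 0, 1) == "skip"
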
Jan 20 19:06:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:06:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:06:59.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.121 254065 INFO nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Took 12.01 seconds to spawn the instance on the hypervisor.
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.122 254065 DEBUG nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.131 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b735def1-6eb9-4747-9b51-4bf99a6455b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.132 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbe9957c5-b1 in ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.134 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbe9957c5-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.135 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[269828ba-8004-4b75-b240-fc76bf5fe4c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.138 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[8f344b3c-042a-4101-8449-b7714cfcf531]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.163 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d01dd1-94b4-4509-9722-8ffdb0fc1ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.179 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[90eb564f-34df-4b56-ae59-f2ce3098f24d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.181 165659 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpsnl7xpvy/privsep.sock']
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.181 254065 INFO nova.compute.manager [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Took 12.88 seconds to build instance.
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.198 254065 DEBUG oslo_concurrency.lockutils [None req-dd5bbe99-9dd0-4e5f-b35e-1b91719abc0f d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:59 compute-0 ceph-mon[74381]: pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 56 op/s
Jan 20 19:06:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 48 op/s
Jan 20 19:06:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:59 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f00012b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:06:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:06:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:06:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:06:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:06:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:59] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 20 19:06:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:06:59] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.937 254065 DEBUG nova.compute.manager [req-0af6de90-eb3b-47cf-a779-462845e57d9e req-2b37f447-54ee-4bf0-99a8-752843f42fdc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.938 254065 DEBUG oslo_concurrency.lockutils [req-0af6de90-eb3b-47cf-a779-462845e57d9e req-2b37f447-54ee-4bf0-99a8-752843f42fdc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.938 254065 DEBUG oslo_concurrency.lockutils [req-0af6de90-eb3b-47cf-a779-462845e57d9e req-2b37f447-54ee-4bf0-99a8-752843f42fdc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.938 254065 DEBUG oslo_concurrency.lockutils [req-0af6de90-eb3b-47cf-a779-462845e57d9e req-2b37f447-54ee-4bf0-99a8-752843f42fdc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.939 254065 DEBUG nova.compute.manager [req-0af6de90-eb3b-47cf-a779-462845e57d9e req-2b37f447-54ee-4bf0-99a8-752843f42fdc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] No waiting events found dispatching network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:06:59 compute-0 nova_compute[254061]: 2026-01-20 19:06:59.939 254065 WARNING nova.compute.manager [req-0af6de90-eb3b-47cf-a779-462845e57d9e req-2b37f447-54ee-4bf0-99a8-752843f42fdc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received unexpected event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 for instance with vm_state active and task_state None.
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.943 165659 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.944 165659 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsnl7xpvy/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.805 259434 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.810 259434 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.811 259434 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.811 259434 INFO oslo.privsep.daemon [-] privsep daemon running as pid 259434
Jan 20 19:06:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:06:59.946 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d787c3-2b62-402c-8412-22d7cba90a31]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:06:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:06:59 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.145 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.146 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.146 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.170 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:00 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:00.445 259434 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:07:00 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:00.445 259434 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:07:00 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:00.445 259434 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.713 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:00 compute-0 nova_compute[254061]: 2026-01-20 19:07:00.837 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:00 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc002600 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:01.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.166 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[686dce21-423c-4884-a35f-ed15babf89de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 NetworkManager[48914]: <info>  [1768936021.1878] manager: (tapbe9957c5-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.189 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[2001c73c-8c17-411f-bddd-a488f3e5ed35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.191 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.192 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.219 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.219 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.220 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.220 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.220 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:07:01 compute-0 systemd-udevd[259448]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.228 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4422b8-d5c3-4dcd-9d29-63dd8c7ab9a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.237 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[c44c5d22-4829-412a-abb4-57a1fe685175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ceph-mon[74381]: pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 220 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.3 MiB/s wr, 48 op/s
Jan 20 19:07:01 compute-0 NetworkManager[48914]: <info>  [1768936021.2759] device (tapbe9957c5-b0): carrier: link connected
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.280 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[0d019aa3-5d4a-4002-9f7f-3b1e80f0fcf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.299 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[69051425-56ef-4faa-a734-247281083137]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe9957c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:65:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421705, 'reachable_time': 18442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259467, 'error': None, 'target': 'ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.316 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[f9799bd3-5253-4672-949b-aac488aec6d9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:657a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421705, 'tstamp': 421705}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259468, 'error': None, 'target': 'ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.330 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[3ffadd78-d583-4879-8da2-b3015107f4a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbe9957c5-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:65:7a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421705, 'reachable_time': 18442, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259469, 'error': None, 'target': 'ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
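
The RTM_NEWLINK replies above are pyroute2-style netlink messages whose attributes arrive as [name, value] pairs under 'attrs'. A small extractor for that shape, with data trimmed from the dump above; the helper is a hypothetical stand-in for pyroute2's own get_attr():

    # Pull named attributes out of a pyroute2-style 'attrs' list, as seen in
    # the RTM_NEWLINK replies above. Hypothetical helper, illustrative only.
    def get_attr(msg, name, default=None):
        for key, value in msg.get("attrs", []):
            if key == name:
                return value
        return default

    # Trimmed from the reply above for tapbe9957c5-b1.
    link = {"attrs": [["IFLA_IFNAME", "tapbe9957c5-b1"],
                      ["IFLA_MTU", 1500],
                      ["IFLA_ADDRESS", "fa:16:3e:9f:65:7a"]]}
    assert get_attr(link, "IFLA_IFNAME") == "tapbe9957c5-b1"
    assert get_attr(link, "IFLA_MTU") == 1500
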
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.367 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[d8fd56ea-d5c4-4ea5-9df9-f5b4f1a7b30e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.452 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f605a5-53b0-4b9d-817f-b43184503168]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.454 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe9957c5-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.454 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.454 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe9957c5-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.457 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:01 compute-0 NetworkManager[48914]: <info>  [1768936021.4583] manager: (tapbe9957c5-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 20 19:07:01 compute-0 kernel: tapbe9957c5-b0: entered promiscuous mode
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.464 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.465 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbe9957c5-b0, col_values=(('external_ids', {'iface-id': '286a9bf9-bd18-4196-95d5-fe7ca2fbe5bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.466 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:01 compute-0 ovn_controller[155128]: 2026-01-20T19:07:01Z|00031|binding|INFO|Releasing lport 286a9bf9-bd18-4196-95d5-fe7ca2fbe5bf from this chassis (sb_readonly=0)
Jan 20 19:07:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.497 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.499 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.500 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/be9957c5-bb46-4eb1-886f-ace069f03c77.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/be9957c5-bb46-4eb1-886f-ace069f03c77.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.501 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[7127b657-b0dd-4053-a6da-2cbe115f1b8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.502 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-be9957c5-bb46-4eb1-886f-ace069f03c77
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/be9957c5-bb46-4eb1-886f-ace069f03c77.pid.haproxy
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID be9957c5-bb46-4eb1-886f-ace069f03c77
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:07:01 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:01.502 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77', 'env', 'PROCESS_TAG=haproxy-be9957c5-bb46-4eb1-886f-ace069f03c77', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/be9957c5-bb46-4eb1-886f-ace069f03c77.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
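
The haproxy_cfg dump above binds the well-known metadata address inside the ovnmeta namespace, proxies requests to the agent's backend socket, and stamps each request with the network ID via X-OVN-Network-ID. A config in that shape could be rendered with plain string templating; this sketch is illustrative and is not the neutron metadata driver's actual template:

    from string import Template

    # Minimal rendering sketch for the per-network proxy stanza, mirroring
    # the fields visible in the haproxy_cfg dump above. Illustrative only.
    CFG = Template("""\
    listen listener
        bind 169.254.169.254:80
        server metadata $socket_path
        http-request add-header X-OVN-Network-ID $network_id
    """)

    print(CFG.substitute(
        socket_path="/var/lib/neutron/metadata_proxy",
        network_id="be9957c5-bb46-4eb1-886f-ace069f03c77",
    ))
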
Jan 20 19:07:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:01 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 20 19:07:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 20 19:07:01 compute-0 ceph-mon[74381]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 20 19:07:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:07:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201095336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:01.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.721 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.831 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:07:01 compute-0 nova_compute[254061]: 2026-01-20 19:07:01.831 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:07:01 compute-0 podman[259525]: 2026-01-20 19:07:01.964130575 +0000 UTC m=+0.094652248 container create 93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:07:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:01 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0002150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:01 compute-0 podman[259525]: 2026-01-20 19:07:01.904140508 +0000 UTC m=+0.034662211 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:07:02 compute-0 systemd[1]: Started libpod-conmon-93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea.scope.
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.068 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.069 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4449MB free_disk=59.967384338378906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.070 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.070 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:07:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e43c1e92bfc636b8119da72484c22b94474b8eade6b64880cf9ed4981cb3034e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:02 compute-0 podman[259525]: 2026-01-20 19:07:02.091454768 +0000 UTC m=+0.221976451 container init 93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:07:02 compute-0 podman[259525]: 2026-01-20 19:07:02.09670803 +0000 UTC m=+0.227229693 container start 93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 20 19:07:02 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [NOTICE]   (259545) : New worker (259547) forked
Jan 20 19:07:02 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [NOTICE]   (259545) : Loading success.
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.214 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Instance 120a65b5-a5a0-4431-bfbb-56c5468d25a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.214 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.215 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.367 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:07:02 compute-0 ceph-mon[74381]: pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.1 MiB/s wr, 111 op/s
Jan 20 19:07:02 compute-0 ceph-mon[74381]: osdmap e167: 3 total, 3 up, 3 in
Jan 20 19:07:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4201095336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:07:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3094904806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.810 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.819 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
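
Placement turns each inventory record into usable capacity as (total - reserved) * allocation_ratio; applied to the figures in the update just logged, that gives the arithmetic below (the helper name is made up):

    # Usable capacity per inventory record: (total - reserved) * allocation_ratio.
    # Figures copied from the ProviderTree update above; helper name illustrative.
    def capacity(total, reserved, allocation_ratio):
        return int((total - reserved) * allocation_ratio)

    assert capacity(8, 0, 4.0) == 32          # VCPU
    assert capacity(7679, 512, 1.0) == 7167   # MEMORY_MB
    assert capacity(59, 1, 0.9) == 52         # DISK_GB
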
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.862 254065 ERROR nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [req-fd166269-45d9-4f1c-b02e-71ab3ac9afc4] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID cb9161e5-191d-495c-920a-01144f42a215.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-fd166269-45d9-4f1c-b02e-71ab3ac9afc4"}]}
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.889 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing inventories for resource provider cb9161e5-191d-495c-920a-01144f42a215 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.910 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating ProviderTree inventory for provider cb9161e5-191d-495c-920a-01144f42a215 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.911 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.933 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing aggregate associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:07:02 compute-0 nova_compute[254061]: 2026-01-20 19:07:02.961 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing trait associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:07:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:02 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0002150 fd 38 proxy header rest len failed header rlen = % (will set dead)
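This ganesha svc_vc_recv EVENT recurs every second or two for the rest of the excerpt, always on fd 38 and always ending in "(will set dead)"; the "%" is a broken format specifier in ganesha itself, so the received header length is not recoverable from the log. The cadence and the immediate reconnects are consistent with a TCP prober, for example a load-balancer health check, opening the PROXY-protocol-enabled NFS port and closing before a complete header arrives. A hypothetical client that produces that traffic shape, purely an assumption about what is hitting the port:

    # Assumption: reproduce the suspected probe pattern by opening the
    # NFS TCP port, sending nothing (no PROXY/RPC header), and closing.
    # Host and port are guesses based on the ganesha service in this log.
    import socket
    import time

    for _ in range(3):
        s = socket.create_connection(
            ("compute-0.ctlplane.example.com", 2049), timeout=2)
        s.close()
        time.sleep(2)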
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.008 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:07:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:03.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:07:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344397701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v745: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.468 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.479 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.535 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updated inventory for provider cb9161e5-191d-495c-920a-01144f42a215 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.536 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating resource provider cb9161e5-191d-495c-920a-01144f42a215 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.536 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:07:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:03 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.565 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.565 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
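The span from 19:07:02.862 to 19:07:03.536 is one complete optimistic-concurrency round trip against Placement: the PUT fails with 409 placement.concurrent_update because another writer bumped the provider generation, nova re-reads the inventory, retries, and the second PUT succeeds, moving the generation from 3 to 4. A minimal sketch of that read/PUT/retry cycle; the endpoint, token, and microversion below are assumptions, not values from this log:

    # Sketch of the generation-checked inventory update visible above.
    # Placement rejects a PUT whose resource_provider_generation is
    # stale, so the client re-reads and retries.
    import requests

    PLACEMENT = "http://placement.example.com:8778"  # assumption
    HEADERS = {"X-Auth-Token": "ADMIN_TOKEN",        # assumption
               "OpenStack-API-Version": "placement 1.26"}

    def set_inventory(rp_uuid, inventories, retries=3):
        url = f"{PLACEMENT}/resource_providers/{rp_uuid}/inventories"
        for _ in range(retries):
            current = requests.get(url, headers=HEADERS).json()
            body = {"resource_provider_generation":
                        current["resource_provider_generation"],
                    "inventories": inventories}
            resp = requests.put(url, json=body, headers=HEADERS)
            if resp.status_code != 409:      # 409 == generation conflict
                resp.raise_for_status()
                return resp.json()
        raise RuntimeError("placement generation conflict persisted")

A transient 409 here is expected noise whenever two reporters race; only a conflict that survives the retry is worth chasing.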
Jan 20 19:07:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:03.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3094904806' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2344397701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:03 compute-0 ovn_controller[155128]: 2026-01-20T19:07:03Z|00032|binding|INFO|Releasing lport 286a9bf9-bd18-4196-95d5-fe7ca2fbe5bf from this chassis (sb_readonly=0)
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.8934] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.8945] device (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <warn>  [1768936023.8947] device (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.892 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.8960] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.8965] device (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <warn>  [1768936023.8966] device (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.9103] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.9110] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.9115] device (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 19:07:03 compute-0 NetworkManager[48914]: <info>  [1768936023.9121] device (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 19:07:03 compute-0 ovn_controller[155128]: 2026-01-20T19:07:03Z|00033|binding|INFO|Releasing lport 286a9bf9-bd18-4196-95d5-fe7ca2fbe5bf from this chassis (sb_readonly=0)
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.932 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:03 compute-0 nova_compute[254061]: 2026-01-20 19:07:03.937 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:03 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.501 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.502 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.502 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:07:04 compute-0 ceph-mon[74381]: pgmap v745: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 20 19:07:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3236890976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1009458724' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.795 254065 DEBUG nova.compute.manager [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-changed-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.795 254065 DEBUG nova.compute.manager [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Refreshing instance network info cache due to event network-changed-cfcfd83d-5be0-4a39-9bc1-94ae78153295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.795 254065 DEBUG oslo_concurrency.lockutils [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.796 254065 DEBUG oslo_concurrency.lockutils [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.796 254065 DEBUG nova.network.neutron [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Refreshing network info cache for port cfcfd83d-5be0-4a39-9bc1-94ae78153295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:07:04 compute-0 nova_compute[254061]: 2026-01-20 19:07:04.909 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:07:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:04 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc0017c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:05.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v746: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 20 19:07:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:05 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0002150 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:05.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:05 compute-0 nova_compute[254061]: 2026-01-20 19:07:05.715 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/291036709' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2838612996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:05 compute-0 nova_compute[254061]: 2026-01-20 19:07:05.839 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:05 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:06 compute-0 nova_compute[254061]: 2026-01-20 19:07:06.129 254065 DEBUG nova.network.neutron [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updated VIF entry in instance network info cache for port cfcfd83d-5be0-4a39-9bc1-94ae78153295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:07:06 compute-0 nova_compute[254061]: 2026-01-20 19:07:06.129 254065 DEBUG nova.network.neutron [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:07:06 compute-0 nova_compute[254061]: 2026-01-20 19:07:06.153 254065 DEBUG oslo_concurrency.lockutils [req-66f28a78-c45b-4daa-bf39-36d204fff909 req-c6698543-0ca7-4278-aec8-b76249e8ca60 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:07:06 compute-0 nova_compute[254061]: 2026-01-20 19:07:06.153 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquired lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:07:06 compute-0 nova_compute[254061]: 2026-01-20 19:07:06.153 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 19:07:06 compute-0 nova_compute[254061]: 2026-01-20 19:07:06.154 254065 DEBUG nova.objects.instance [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 120a65b5-a5a0-4431-bfbb-56c5468d25a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:07:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:06 compute-0 ceph-mon[74381]: pgmap v746: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 106 op/s
Jan 20 19:07:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:06 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003e20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:07.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:07.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:07:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:07.149Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:07:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:07.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
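These alertmanager warnings show the ceph-dashboard webhook receivers on compute-1 and compute-2 timing out on 8443, while compute-0's own receiver accepts a POST a second later (the dashboard INFO line at 19:07:08). A quick reachability probe over the same three endpoints; the host list and the empty alert batch are assumptions, not the payload alertmanager actually sent:

    # Probe the dashboard webhook endpoints that are timing out above.
    # An empty alert batch keeps the request webhook-shaped without
    # inventing alert content.
    import requests

    payload = {"alerts": []}  # assumption: minimal webhook-shaped body
    for host in ("compute-0", "compute-1", "compute-2"):
        url = (f"http://{host}.ctlplane.example.com:8443"
               "/api/prometheus_receiver")
        try:
            r = requests.post(url, json=payload, timeout=5)
            print(host, r.status_code)
        except requests.RequestException as exc:
            print(host, "unreachable:", exc)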
Jan 20 19:07:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v747: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 84 op/s
Jan 20 19:07:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:07 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc0017c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.567 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.583 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Releasing lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.584 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
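The network_info payloads logged at 19:07:06 and 19:07:07 are plain JSON, so the useful fields can be pulled out directly. A sketch that reduces one VIF entry of that shape to its fixed and floating addresses, with the data abbreviated from the log:

    # Walk one VIF dict shaped like the "Updating instance_info_cache
    # with network_info" payloads above and print fixed -> floating IPs.
    vif = {
        "id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295",
        "network": {"subnets": [{
            "ips": [{"address": "10.100.0.6", "type": "fixed",
                     "floating_ips": [{"address": "192.168.122.215"}]}],
        }]},
    }

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip.get("floating_ips", [])]
            print(ip["address"], "->", floats)

For port cfcfd83d-5be0-4a39-9bc1-94ae78153295 this prints 10.100.0.6 -> ['192.168.122.215'], which matches the DHCPACK for fa:16:3e:7d:76:12 later in the log.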
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.584 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.584 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.584 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.585 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.585 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:07 compute-0 nova_compute[254061]: 2026-01-20 19:07:07.585 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:07:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:07.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:07 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:08 compute-0 ceph-mgr[74676]: [dashboard INFO request] [192.168.122.100:43144] [POST] [200] [0.001s] [4.0B] [0ad30875-0a85-41e7-b612-a05414ee8d29] /api/prometheus_receiver
Jan 20 19:07:08 compute-0 ceph-mon[74381]: pgmap v747: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 84 op/s
Jan 20 19:07:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:08 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:09.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:09 compute-0 nova_compute[254061]: 2026-01-20 19:07:09.207 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:07:09 compute-0 sudo[259609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:09 compute-0 sudo[259609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:09 compute-0 sudo[259609]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:09 compute-0 sudo[259634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:07:09 compute-0 sudo[259634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v748: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 84 op/s
Jan 20 19:07:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:09 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003e40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:09.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:09] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 20 19:07:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:09] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 20 19:07:09 compute-0 sudo[259634]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:09 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc0017c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v749: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 16 op/s
Jan 20 19:07:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v750: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 17 op/s
Jan 20 19:07:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:07:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:07:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:10 compute-0 sudo[259692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:10 compute-0 sudo[259692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:10 compute-0 sudo[259692]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:10 compute-0 sudo[259717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:07:10 compute-0 sudo[259717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:10 compute-0 nova_compute[254061]: 2026-01-20 19:07:10.759 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.811236896 +0000 UTC m=+0.051365253 container create fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 19:07:10 compute-0 nova_compute[254061]: 2026-01-20 19:07:10.842 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:10 compute-0 systemd[1]: Started libpod-conmon-fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc.scope.
Jan 20 19:07:10 compute-0 ceph-mon[74381]: pgmap v748: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 307 B/s wr, 84 op/s
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:07:10 compute-0 ceph-mon[74381]: pgmap v749: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 16 op/s
Jan 20 19:07:10 compute-0 ceph-mon[74381]: pgmap v750: 337 pgs: 337 active+clean; 88 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 17 op/s
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:07:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.78958989 +0000 UTC m=+0.029718277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.925076514 +0000 UTC m=+0.165204881 container init fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.933581575 +0000 UTC m=+0.173709932 container start fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_johnson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.936722379 +0000 UTC m=+0.176850756 container attach fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_johnson, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:07:10 compute-0 modest_johnson[259803]: 167 167
Jan 20 19:07:10 compute-0 systemd[1]: libpod-fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc.scope: Deactivated successfully.
Jan 20 19:07:10 compute-0 conmon[259803]: conmon fb1ef1d9f89c207381d2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc.scope/container/memory.events
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.941832588 +0000 UTC m=+0.181960975 container died fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_johnson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb5fa7759135ae8f4cd1be4e9f3f62c9bfe1743935f9fe4d78c873eedcc5336e-merged.mount: Deactivated successfully.
Jan 20 19:07:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:10 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003250 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:10 compute-0 podman[259787]: 2026-01-20 19:07:10.996786698 +0000 UTC m=+0.236915065 container remove fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_johnson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
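The throwaway container modest_johnson lives for roughly 130 ms and emits only "167 167". That matches cephadm's habit of probing the ceph uid and gid inside the image before writing files; the exact probe below is an assumption, not a command taken from this log:

    # Assumption: one-shot uid/gid probe of the ceph image, mirroring
    # the short-lived container above (167 167 is ceph:ceph upstream).
    import subprocess

    IMAGE = "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"
    print(subprocess.check_output(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"]).decode().strip())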
Jan 20 19:07:11 compute-0 systemd[1]: libpod-conmon-fb1ef1d9f89c207381d2c9a164fe0a5ef58776969868fb9a0639aa8f79f3f8fc.scope: Deactivated successfully.
Jan 20 19:07:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:11.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.229002586 +0000 UTC m=+0.055238870 container create da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 19:07:11 compute-0 systemd[1]: Started libpod-conmon-da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe.scope.
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.206846515 +0000 UTC m=+0.033082819 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e636c9ffe84df6d5aef377c801854a24296812c3e699f6dd87924a63a845931/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e636c9ffe84df6d5aef377c801854a24296812c3e699f6dd87924a63a845931/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e636c9ffe84df6d5aef377c801854a24296812c3e699f6dd87924a63a845931/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e636c9ffe84df6d5aef377c801854a24296812c3e699f6dd87924a63a845931/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e636c9ffe84df6d5aef377c801854a24296812c3e699f6dd87924a63a845931/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.350796188 +0000 UTC m=+0.177032532 container init da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bhaskara, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.36158266 +0000 UTC m=+0.187818974 container start da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bhaskara, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.366245077 +0000 UTC m=+0.192481401 container attach da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:11 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:11 compute-0 affectionate_bhaskara[259843]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:07:11 compute-0 affectionate_bhaskara[259843]: --> All data devices are unavailable
Jan 20 19:07:11 compute-0 systemd[1]: libpod-da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe.scope: Deactivated successfully.
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.761431864 +0000 UTC m=+0.587668148 container died da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:07:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e636c9ffe84df6d5aef377c801854a24296812c3e699f6dd87924a63a845931-merged.mount: Deactivated successfully.
Jan 20 19:07:11 compute-0 podman[259827]: 2026-01-20 19:07:11.808405067 +0000 UTC m=+0.634641361 container remove da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:07:11 compute-0 ovn_controller[155128]: 2026-01-20T19:07:11Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7d:76:12 10.100.0.6
Jan 20 19:07:11 compute-0 ovn_controller[155128]: 2026-01-20T19:07:11Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7d:76:12 10.100.0.6
Jan 20 19:07:11 compute-0 systemd[1]: libpod-conmon-da0b962f75098cfcbf9079a38ea3d23dbd6ac22b5e2af38ccce40cd58e94babe.scope: Deactivated successfully.
Jan 20 19:07:11 compute-0 sudo[259717]: pam_unix(sudo:session): session closed for user root
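"All data devices are unavailable" from the lvm batch run just above is the no-op path: ceph-volume refused /dev/ceph_vg0/ceph_lv0, which is what it reports when the LV is already consumed, typically by an existing OSD, and cephadm follows up at 19:07:12 with an lvm list query to read back what is deployed. A sketch of consuming that query's JSON, assuming it runs as root on the same host:

    # Read back deployed OSDs with the same "ceph-volume lvm list
    # --format json" call cephadm issues at 19:07:12. The JSON maps
    # OSD ids to lists of LV records; empty output means none deployed.
    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(osd_id, lv.get("lv_path"),
                  lv.get("tags", {}).get("ceph.osd_fsid"))

An empty object here would instead point at a genuinely rejected device, worth inspecting with ceph-volume inventory.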
Jan 20 19:07:11 compute-0 sudo[259872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:11 compute-0 sudo[259872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:11 compute-0 sudo[259872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:11 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003e60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:12 compute-0 sudo[259897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:07:12 compute-0 sudo[259897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
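[annotation] The sudo COMMAND= line above is the mgr driving cephadm on this host: a digest-named copy of the cephadm binary runs `ceph-volume ... lvm list --format json` inside a one-shot container (the busy_moore/awesome_jepsen churn that follows). A sketch of the same invocation from Python; the argv is verbatim from the log, only the use of subprocess is an assumption:

    import subprocess

    # Reconstructed from the COMMAND= field above; paths, image digest and
    # fsid are copied from the log, the subprocess wrapper is illustrative.
    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    CEPHADM = (f"/var/lib/ceph/{FSID}/cephadm."
               "1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    result = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "--image", IMAGE, "--timeout", "895",
         "ceph-volume", "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True,
    )
    print(result.stdout)   # the JSON document that appears further down
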
Jan 20 19:07:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v751: 337 pgs: 337 active+clean; 101 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.545875376 +0000 UTC m=+0.059589767 container create c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_moore, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:12 compute-0 systemd[1]: Started libpod-conmon-c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791.scope.
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.526973053 +0000 UTC m=+0.040687464 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.658990503 +0000 UTC m=+0.172704984 container init c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_moore, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.667078673 +0000 UTC m=+0.180793064 container start c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.669999222 +0000 UTC m=+0.183713633 container attach c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_moore, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:07:12 compute-0 busy_moore[259981]: 167 167
Jan 20 19:07:12 compute-0 systemd[1]: libpod-c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791.scope: Deactivated successfully.
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.675499781 +0000 UTC m=+0.189214172 container died c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:07:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-46fdbf12b2517381b387af0889b82dee074ddf4bff3d476e31ed9f8d15584fff-merged.mount: Deactivated successfully.
Jan 20 19:07:12 compute-0 podman[259965]: 2026-01-20 19:07:12.729961618 +0000 UTC m=+0.243676009 container remove c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:07:12 compute-0 systemd[1]: libpod-conmon-c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791.scope: Deactivated successfully.
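[annotation] The podman lines above trace the complete life of one throwaway cephadm helper container (busy_moore): create, image pull, init, start, attach, its "167 167" output, then died and remove, with systemd tearing down the libcrun and conmon scopes around it. A minimal sketch for pulling that sequence out of journal lines shaped exactly like these ("container <verb> <64-hex-id> (image=...)"); this is tailored to this log, not a general podman-events parser:

    import re

    # Matches the "container <verb> <id>" phrasing used in the lines above.
    EVENT_RE = re.compile(
        r"container (?P<verb>create|init|start|attach|died|remove) "
        r"(?P<cid>[0-9a-f]{64})"
    )

    def lifecycle(lines):
        # Yield (verb, short container id) for every podman event found.
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                yield m.group("verb"), m.group("cid")[:12]

    sample = ('podman[259965]: 2026-01-20 19:07:12.545875376 +0000 UTC '
              'm=+0.059589767 container create '
              'c2c4f7ea312b6858dee15cc5edb7042d61df91bd19ba0d13b176082985c00791'
              ' (image=quay.io/ceph/ceph@sha256:..., name=busy_moore)')
    print(list(lifecycle([sample])))   # [('create', 'c2c4f7ea312b')]
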
Jan 20 19:07:12 compute-0 podman[260005]: 2026-01-20 19:07:12.930529987 +0000 UTC m=+0.057345056 container create 8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_jepsen, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:07:12 compute-0 systemd[1]: Started libpod-conmon-8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2.scope.
Jan 20 19:07:12 compute-0 podman[260005]: 2026-01-20 19:07:12.903582346 +0000 UTC m=+0.030397505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:12 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc0017c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24008e34d35e25bdff2a73bd2129081d9b7fe0a1908fbfdc38a0ca02fd3fe329/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24008e34d35e25bdff2a73bd2129081d9b7fe0a1908fbfdc38a0ca02fd3fe329/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24008e34d35e25bdff2a73bd2129081d9b7fe0a1908fbfdc38a0ca02fd3fe329/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24008e34d35e25bdff2a73bd2129081d9b7fe0a1908fbfdc38a0ca02fd3fe329/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
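[annotation] The four xfs warnings above flag the classic 32-bit time_t ceiling on the container bind mounts: 0x7fffffff seconds after the epoch lands in January 2038. Two lines verify the arithmetic:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t, hence the kernel's
    # "supports timestamps until 2038" wording.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00
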
Jan 20 19:07:13 compute-0 podman[260005]: 2026-01-20 19:07:13.040682704 +0000 UTC m=+0.167497893 container init 8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_jepsen, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:07:13 compute-0 podman[260005]: 2026-01-20 19:07:13.05124739 +0000 UTC m=+0.178062459 container start 8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 19:07:13 compute-0 podman[260005]: 2026-01-20 19:07:13.054584781 +0000 UTC m=+0.181399890 container attach 8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_jepsen, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:07:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:13.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
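[annotation] The three radosgw lines above are a single request: beast emits an access line with client, user, timestamp, request, status, bytes and latency. The anonymous "HEAD / HTTP/1.0" probes recurring every two seconds from 192.168.122.100/102 are load-balancer health checks. A field-splitting sketch, assuming only the layout visible in these lines:

    import re

    # Field layout inferred from the beast access lines in this log.
    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<lat>[\d.]+)s'
    )

    line = ('beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous '
            '[20/Jan/2026:19:07:13.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group("ip"), m.group("status"), m.group("lat"))
    # -> 192.168.122.102 200 0.000000000
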
Jan 20 19:07:13 compute-0 ceph-mon[74381]: pgmap v751: 337 pgs: 337 active+clean; 101 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 249 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]: {
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:     "0": [
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:         {
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "devices": [
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "/dev/loop3"
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             ],
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "lv_name": "ceph_lv0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "lv_size": "21470642176",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "name": "ceph_lv0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "tags": {
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.cluster_name": "ceph",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.crush_device_class": "",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.encrypted": "0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.osd_id": "0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.type": "block",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.vdo": "0",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:                 "ceph.with_tpm": "0"
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             },
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "type": "block",
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:             "vg_name": "ceph_vg0"
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:         }
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]:     ]
Jan 20 19:07:13 compute-0 awesome_jepsen[260021]: }
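[annotation] That JSON block, printed by the awesome_jepsen container, is the `ceph-volume lvm list --format json` result requested at 19:07:12: a map of OSD id to logical volumes, with the cluster/OSD identity carried in LVM tags. A minimal consumer, with the payload trimmed to the fields it uses (key names are verbatim from the output above):

    import json

    raw = """
    {
      "0": [
        {
          "lv_path": "/dev/ceph_vg0/ceph_lv0",
          "tags": {
            "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
            "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
            "ceph.osd_id": "0"
          }
        }
      ]
    }
    """
    # Top level maps OSD id -> list of LVs; identity lives in the tags.
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            t = lv["tags"]
            print(f"osd.{osd_id}", lv["lv_path"], t["ceph.osd_fsid"])
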
Jan 20 19:07:13 compute-0 systemd[1]: libpod-8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2.scope: Deactivated successfully.
Jan 20 19:07:13 compute-0 podman[260005]: 2026-01-20 19:07:13.434681138 +0000 UTC m=+0.561496257 container died 8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_jepsen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:07:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-24008e34d35e25bdff2a73bd2129081d9b7fe0a1908fbfdc38a0ca02fd3fe329-merged.mount: Deactivated successfully.
Jan 20 19:07:13 compute-0 podman[260005]: 2026-01-20 19:07:13.490998145 +0000 UTC m=+0.617813214 container remove 8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_jepsen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:07:13 compute-0 systemd[1]: libpod-conmon-8d2ffbba9784bcf75417743d9bf765d26e4e4bd7e208e09fc4ff7a153d618ac2.scope: Deactivated successfully.
Jan 20 19:07:13 compute-0 sudo[259897]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:13 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:13 compute-0 sudo[260044]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:13 compute-0 sudo[260044]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:13 compute-0 sudo[260044]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:13 compute-0 sudo[260070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:07:13 compute-0 sudo[260070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:13 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v752: 337 pgs: 337 active+clean; 109 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.8 MiB/s wr, 58 op/s
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.308597316 +0000 UTC m=+0.079008963 container create baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jepsen, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:07:14 compute-0 systemd[1]: Started libpod-conmon-baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13.scope.
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.272479627 +0000 UTC m=+0.042891344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.428678553 +0000 UTC m=+0.199090190 container init baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jepsen, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.442649162 +0000 UTC m=+0.213060809 container start baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.446913767 +0000 UTC m=+0.217325394 container attach baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:07:14 compute-0 distracted_jepsen[260151]: 167 167
Jan 20 19:07:14 compute-0 systemd[1]: libpod-baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13.scope: Deactivated successfully.
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.453111505 +0000 UTC m=+0.223523162 container died baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jepsen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:07:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b47c941df729f3ed7110c1a96f92d858dc20d27a92e5c2afb0c8caba8ea97658-merged.mount: Deactivated successfully.
Jan 20 19:07:14 compute-0 podman[260134]: 2026-01-20 19:07:14.497516939 +0000 UTC m=+0.267928596 container remove baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_jepsen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Jan 20 19:07:14 compute-0 systemd[1]: libpod-conmon-baf0229d35f21590162ae6febe6ef540a55061bba0b29ea0631582e53d95af13.scope: Deactivated successfully.
Jan 20 19:07:14 compute-0 podman[260175]: 2026-01-20 19:07:14.75825565 +0000 UTC m=+0.036108930 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:14 compute-0 podman[260175]: 2026-01-20 19:07:14.863793753 +0000 UTC m=+0.141647033 container create c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:07:14 compute-0 systemd[1]: Started libpod-conmon-c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f.scope.
Jan 20 19:07:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb628901480cff286c4c42104e17614b9f90c56116dbb512b1f87e0c41c1b2e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb628901480cff286c4c42104e17614b9f90c56116dbb512b1f87e0c41c1b2e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb628901480cff286c4c42104e17614b9f90c56116dbb512b1f87e0c41c1b2e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb628901480cff286c4c42104e17614b9f90c56116dbb512b1f87e0c41c1b2e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:14 compute-0 podman[260175]: 2026-01-20 19:07:14.997323223 +0000 UTC m=+0.275176453 container init c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:14 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003e80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:15 compute-0 podman[260175]: 2026-01-20 19:07:15.003929753 +0000 UTC m=+0.281782983 container start c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:15 compute-0 podman[260175]: 2026-01-20 19:07:15.00753591 +0000 UTC m=+0.285389190 container attach c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:15.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:15 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc0017c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:15.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:15 compute-0 nova_compute[254061]: 2026-01-20 19:07:15.794 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:15 compute-0 ceph-mon[74381]: pgmap v752: 337 pgs: 337 active+clean; 109 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.8 MiB/s wr, 58 op/s
Jan 20 19:07:15 compute-0 nova_compute[254061]: 2026-01-20 19:07:15.845 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:15 compute-0 lvm[260267]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:07:15 compute-0 lvm[260267]: VG ceph_vg0 finished
Jan 20 19:07:15 compute-0 modest_ritchie[260192]: {}
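[annotation] modest_ritchie is the matching `ceph-volume raw list --format json` run (the COMMAND= line at 19:07:13), and it prints `{}`: the only OSD device on this host is LVM-managed, so the raw (non-LVM) inventory is empty. The cheap way to act on that result:

    import json

    raw_devices = json.loads("{}")   # the modest_ritchie output above
    if not raw_devices:
        print("no raw (non-LVM) OSDs on this host; use the lvm list instead")
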
Jan 20 19:07:15 compute-0 systemd[1]: libpod-c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f.scope: Deactivated successfully.
Jan 20 19:07:15 compute-0 systemd[1]: libpod-c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f.scope: Consumed 1.621s CPU time.
Jan 20 19:07:15 compute-0 podman[260175]: 2026-01-20 19:07:15.97255685 +0000 UTC m=+1.250410080 container died c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:07:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:15 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0003f60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb628901480cff286c4c42104e17614b9f90c56116dbb512b1f87e0c41c1b2e8-merged.mount: Deactivated successfully.
Jan 20 19:07:16 compute-0 podman[260175]: 2026-01-20 19:07:16.034093368 +0000 UTC m=+1.311946588 container remove c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_ritchie, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 19:07:16 compute-0 systemd[1]: libpod-conmon-c0382698aba612842067d1b78b6a715a5a04a8ba83d14f47ea8d3fcfe3b1f43f.scope: Deactivated successfully.
Jan 20 19:07:16 compute-0 sudo[260070]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:07:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v753: 337 pgs: 337 active+clean; 109 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.8 MiB/s wr, 57 op/s
Jan 20 19:07:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:07:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
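[annotation] With both device scans done, the mgr caches the host's inventory in the monitor's config-key store, which is what the two handle_command/audit pairs above record. Reading the cached blob back is one standard CLI call; a hedged sketch (key name copied from the mon_command line, the JSON shape of the payload is assumed from how cephadm writes it):

    import json
    import subprocess

    # Key name verbatim from the mon_command line above.
    KEY = "mgr/cephadm/host.compute-0.devices.0"
    out = subprocess.run(
        ["ceph", "config-key", "get", KEY],
        capture_output=True, text=True, check=True,
    ).stdout
    inventory = json.loads(out)   # payload assumed JSON, as cephadm stores it
    print(type(inventory))
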
Jan 20 19:07:16 compute-0 sudo[260286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:07:16 compute-0 sudo[260286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:16 compute-0 sudo[260286]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:16 compute-0 sudo[260311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:07:16 compute-0 sudo[260311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:16 compute-0 sudo[260311]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:16 compute-0 podman[260310]: 2026-01-20 19:07:16.256739436 +0000 UTC m=+0.066525445 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:07:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:16 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6ec004850 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:17 compute-0 ceph-mon[74381]: pgmap v753: 337 pgs: 337 active+clean; 109 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 305 KiB/s rd, 2.8 MiB/s wr, 57 op/s
Jan 20 19:07:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:17 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:07:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:17.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:17.150Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
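[annotation] The Alertmanager error above means its webhook POSTs to the Ceph dashboard receivers on compute-1/compute-2 port 8443 never completed ("context deadline exceeded": the peers' dashboards are down or unreachable). For orientation, a stand-in showing the shape of the endpoint being called; the real /api/prometheus_receiver is served by ceph-mgr's dashboard module, so everything below is illustrative:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            if self.path != "/api/prometheus_receiver":
                self.send_error(404)
                return
            length = int(self.headers.get("Content-Length", 0))
            # Alertmanager webhook payloads carry an "alerts" list.
            body = json.loads(self.rfile.read(length) or b"{}")
            print("alerts received:", len(body.get("alerts", [])))
            self.send_response(200)
            self.end_headers()

    # HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
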
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.233 254065 INFO nova.compute.manager [None req-f8c9f9b0-da97-4bfe-95fc-ceb7e715d27d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Get console output
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.241 254065 INFO oslo.privsep.daemon [None req-f8c9f9b0-da97-4bfe-95fc-ceb7e715d27d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp0dgzp0sz/privsep.sock']
Jan 20 19:07:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:17 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:17.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.937 254065 INFO oslo.privsep.daemon [None req-f8c9f9b0-da97-4bfe-95fc-ceb7e715d27d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Spawned new privsep daemon via rootwrap
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.824 260360 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.830 260360 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.834 260360 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 20 19:07:17 compute-0 nova_compute[254061]: 2026-01-20 19:07:17.834 260360 INFO oslo.privsep.daemon [-] privsep daemon running as pid 260360
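[annotation] The privsep lines above show oslo.privsep bootstrapping: nova-compute execs `sudo nova-rootwrap ... privsep-helper`, which becomes a root daemon holding only the capability set just logged and serving the unprivileged process over a unix socket. A sketch of how such a context is declared, modelled on the `nova.privsep.sys_admin_pctxt` named in the helper's arguments; treat the entrypoint function as illustrative:

    from oslo_privsep import capabilities, priv_context

    # Capability list mirrors the eff/prm set printed by the daemon above.
    sys_admin_pctxt = priv_context.PrivContext(
        "nova",
        cfg_section="nova_sys_admin",
        pypath=__name__ + ".sys_admin_pctxt",
        capabilities=[
            capabilities.CAP_CHOWN,
            capabilities.CAP_DAC_OVERRIDE,
            capabilities.CAP_DAC_READ_SEARCH,
            capabilities.CAP_FOWNER,
            capabilities.CAP_NET_ADMIN,
            capabilities.CAP_SYS_ADMIN,
        ],
    )

    @sys_admin_pctxt.entrypoint
    def read_console_log(path):
        # Runs inside the root privsep daemon, not in nova-compute itself.
        with open(path, "rb") as f:
            return f.read()
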
Jan 20 19:07:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:17 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003ea0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:18 compute-0 nova_compute[254061]: 2026-01-20 19:07:18.039 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
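[annotation] The "can't concat NoneType to bytes" nova ignores above is an ordinary Python TypeError: a console pty read returned None and the code appended it to a bytes buffer. A two-line reproduction of the message:

    buf = b""
    chunk = None          # e.g. a pty read that yielded nothing
    try:
        buf += chunk
    except TypeError as exc:
        print(exc)        # -> can't concat NoneType to bytes
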
Jan 20 19:07:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 3.0 MiB/s wr, 89 op/s
Jan 20 19:07:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:18.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:18 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:19.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:19 compute-0 ceph-mon[74381]: pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 3.0 MiB/s wr, 89 op/s
Jan 20 19:07:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:19.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:07:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:07:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:19 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 20 19:07:20 compute-0 nova_compute[254061]: 2026-01-20 19:07:20.831 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:20 compute-0 nova_compute[254061]: 2026-01-20 19:07:20.847 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:20 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:21.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:21 compute-0 ceph-mon[74381]: pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Jan 20 19:07:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:21 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:21.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:21 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v756: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 19:07:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:23.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:23 compute-0 ceph-mon[74381]: pgmap v756: 337 pgs: 337 active+clean; 121 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 19:07:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:23 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:24 compute-0 podman[260368]: 2026-01-20 19:07:24.111648308 +0000 UTC m=+0.087696640 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:07:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v757: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 701 KiB/s wr, 36 op/s
Jan 20 19:07:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190724 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
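[annotation] The haproxy warning marks backend nfs.cephfs.0 DOWN after a Layer4 check, i.e. a plain TCP connect that was refused; the ganesha instance behind it is the one emitting the svc_vc_recv errors throughout this window. A minimal equivalent probe; success means only that the TCP handshake completed, nothing is sent (host and port below are illustrative, 2049 being the standard NFS port):

    import socket

    def tcp_up(host, port, timeout=1.0):
        # Layer4 semantics: connect, then immediately close.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:    # ConnectionRefusedError, timeouts, ...
            print(f"{host}:{port} DOWN: {exc}")
            return False

    tcp_up("192.0.2.10", 2049)    # illustrative TEST-NET address
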
Jan 20 19:07:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:25.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:25 compute-0 ceph-mon[74381]: pgmap v757: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 701 KiB/s wr, 36 op/s
Jan 20 19:07:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:07:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:25 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:25.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:25 compute-0 nova_compute[254061]: 2026-01-20 19:07:25.849 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:07:25 compute-0 nova_compute[254061]: 2026-01-20 19:07:25.851 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:07:25 compute-0 nova_compute[254061]: 2026-01-20 19:07:25.852 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:07:25 compute-0 nova_compute[254061]: 2026-01-20 19:07:25.852 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:07:25 compute-0 nova_compute[254061]: 2026-01-20 19:07:25.902 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:25 compute-0 nova_compute[254061]: 2026-01-20 19:07:25.903 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:07:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:26 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f20 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v758: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 107 KiB/s wr, 23 op/s
Jan 20 19:07:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.682000) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936046682065, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 1024, "num_deletes": 256, "total_data_size": 1717748, "memory_usage": 1743256, "flush_reason": "Manual Compaction"}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936046692498, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 1673281, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23183, "largest_seqno": 24206, "table_properties": {"data_size": 1668298, "index_size": 2507, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10777, "raw_average_key_size": 19, "raw_value_size": 1657986, "raw_average_value_size": 2944, "num_data_blocks": 112, "num_entries": 563, "num_filter_entries": 563, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935967, "oldest_key_time": 1768935967, "file_creation_time": 1768936046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 10531 microseconds, and 4959 cpu microseconds.
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.692540) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 1673281 bytes OK
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.692557) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.694458) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.694471) EVENT_LOG_v1 {"time_micros": 1768936046694467, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.694488) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 1712952, prev total WAL file size 1712952, number of live WAL files 2.
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.695186) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(1634KB)], [50(11MB)]
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936046695236, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 14068635, "oldest_snapshot_seqno": -1}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5615 keys, 13897020 bytes, temperature: kUnknown
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936046781680, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13897020, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13859072, "index_size": 22798, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 143730, "raw_average_key_size": 25, "raw_value_size": 13757017, "raw_average_value_size": 2450, "num_data_blocks": 926, "num_entries": 5615, "num_filter_entries": 5615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.781950) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13897020 bytes
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.783333) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.6 rd, 160.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 11.8 +0.0 blob) out(13.3 +0.0 blob), read-write-amplify(16.7) write-amplify(8.3) OK, records in: 6146, records dropped: 531 output_compression: NoCompression
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.783371) EVENT_LOG_v1 {"time_micros": 1768936046783354, "job": 26, "event": "compaction_finished", "compaction_time_micros": 86510, "compaction_time_cpu_micros": 26843, "output_level": 6, "num_output_files": 1, "total_output_size": 13897020, "num_input_records": 6146, "num_output_records": 5615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936046784254, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936046789279, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.695097) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.789323) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.789329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.789330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.789332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:07:26 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:07:26.789333) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:07:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:27 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6e0004530 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:27.151Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:27.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:27 compute-0 ceph-mon[74381]: pgmap v758: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 107 KiB/s wr, 23 op/s
Jan 20 19:07:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:27 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:27.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:28 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6cc004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v759: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 111 KiB/s wr, 23 op/s
Jan 20 19:07:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:28.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:29 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:29.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:29 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:29.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:07:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:07:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:30 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:30 compute-0 ceph-mon[74381]: pgmap v759: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 111 KiB/s wr, 23 op/s
Jan 20 19:07:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/533909390' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:07:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v760: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 19:07:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:30.284 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:07:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:30.285 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:07:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:30.286 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:07:30 compute-0 nova_compute[254061]: 2026-01-20 19:07:30.904 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:30.923 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:07:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:30.924 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:07:30 compute-0 nova_compute[254061]: 2026-01-20 19:07:30.924 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:31 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:31 compute-0 ceph-mon[74381]: pgmap v760: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 19:07:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:31.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:31 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:31.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:32 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v761: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 17 KiB/s wr, 7 op/s
Jan 20 19:07:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:07:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:33.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:33 compute-0 ceph-mon[74381]: pgmap v761: 337 pgs: 337 active+clean; 121 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 17 KiB/s wr, 7 op/s
Jan 20 19:07:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:33 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:33.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:34 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v762: 337 pgs: 337 active+clean; 142 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 834 KiB/s wr, 11 op/s
Jan 20 19:07:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:35 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4001f10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:35.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:35 compute-0 ceph-mon[74381]: pgmap v762: 337 pgs: 337 active+clean; 142 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 834 KiB/s wr, 11 op/s
Jan 20 19:07:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/57096459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:07:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2116771972' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:07:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:35 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:35.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:35 compute-0 nova_compute[254061]: 2026-01-20 19:07:35.907 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:36 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:36 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:07:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:36 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:07:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v763: 337 pgs: 337 active+clean; 142 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 833 KiB/s wr, 11 op/s
Jan 20 19:07:36 compute-0 sudo[260408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:07:36 compute-0 sudo[260408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:36 compute-0 sudo[260408]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:37 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:37.152Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:37.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:37 compute-0 ceph-mon[74381]: pgmap v763: 337 pgs: 337 active+clean; 142 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 833 KiB/s wr, 11 op/s
Jan 20 19:07:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:37 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d4002e90 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:37.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:38 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v764: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 19:07:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:38.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:07:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:38.847Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:07:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:38.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d0001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:39.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:39 compute-0 ceph-mon[74381]: pgmap v764: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 19:07:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:39 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:07:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:39.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:07:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:07:39 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:07:39.927 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:07:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:40 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6c4003f40 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v765: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 19:07:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:07:40 compute-0 ceph-mon[74381]: pgmap v765: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 37 op/s
Jan 20 19:07:40 compute-0 nova_compute[254061]: 2026-01-20 19:07:40.909 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:41 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6f0004880 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:41.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:41 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:41.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:42 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:07:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v766: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 20 19:07:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[257749]: 20/01/2026 19:07:43 : epoch 696fd1e2 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa6d00032f0 fd 38 proxy ignored for local
Jan 20 19:07:43 compute-0 kernel: ganesha.nfsd[259315]: segfault at 50 ip 00007fa77b75a32e sp 00007fa7017f9210 error 4 in libntirpc.so.5.8[7fa77b73f000+2c000] likely on CPU 2 (core 0, socket 2)
Jan 20 19:07:43 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 19:07:43 compute-0 systemd[1]: Started Process Core Dump (PID 260441/UID 0).
Jan 20 19:07:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:43.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:43 compute-0 ceph-mon[74381]: pgmap v766: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Jan 20 19:07:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v767: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 20 19:07:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190744 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:07:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:45.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:45.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:45 compute-0 nova_compute[254061]: 2026-01-20 19:07:45.910 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v768: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1000 KiB/s wr, 100 op/s
Jan 20 19:07:46 compute-0 systemd-coredump[260442]: Process 257753 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 59:
                                                    #0  0x00007fa77b75a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 19:07:46 compute-0 systemd[1]: systemd-coredump@14-260441-0.service: Deactivated successfully.
Jan 20 19:07:46 compute-0 nova_compute[254061]: 2026-01-20 19:07:46.462 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:46 compute-0 systemd[1]: systemd-coredump@14-260441-0.service: Consumed 1.188s CPU time.
Jan 20 19:07:46 compute-0 ceph-mon[74381]: pgmap v767: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 20 19:07:46 compute-0 podman[260452]: 2026-01-20 19:07:46.519454394 +0000 UTC m=+0.031984168 container died 5256222f490819f563fd54b46fc4b0b27af425fe64ecea947d310103fd077b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:07:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-515e31a79b7400cdbd59d87239d8412710ed25b4f278eb9416ecc0881fce7c26-merged.mount: Deactivated successfully.
Jan 20 19:07:46 compute-0 podman[260451]: 2026-01-20 19:07:46.562144581 +0000 UTC m=+0.063043380 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 19:07:46 compute-0 podman[260452]: 2026-01-20 19:07:46.581352132 +0000 UTC m=+0.093881906 container remove 5256222f490819f563fd54b46fc4b0b27af425fe64ecea947d310103fd077b6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:46 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 19:07:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:46 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:07:46 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.621s CPU time.
Jan 20 19:07:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:47.153Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:07:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:47.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:07:47 compute-0 ceph-mon[74381]: pgmap v768: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1000 KiB/s wr, 100 op/s
Jan 20 19:07:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v769: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1002 KiB/s wr, 101 op/s
Jan 20 19:07:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:07:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2622568277' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:07:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:07:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2622568277' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:07:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:48.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:48 compute-0 ceph-mon[74381]: pgmap v769: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1002 KiB/s wr, 101 op/s
Jan 20 19:07:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2622568277' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:07:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2622568277' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:07:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:49.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:07:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:07:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v770: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 19:07:50 compute-0 nova_compute[254061]: 2026-01-20 19:07:50.913 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190751 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:07:51 compute-0 ceph-mon[74381]: pgmap v770: 337 pgs: 337 active+clean; 167 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 20 19:07:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:51.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v771: 337 pgs: 337 active+clean; 183 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Jan 20 19:07:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:53.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:53 compute-0 ceph-mon[74381]: pgmap v771: 337 pgs: 337 active+clean; 183 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 104 op/s
Jan 20 19:07:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:53.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v772: 337 pgs: 337 active+clean; 192 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 78 op/s
Jan 20 19:07:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:07:54
Jan 20 19:07:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:07:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:07:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', '.nfs', 'volumes', 'vms', 'images', 'default.rgw.meta', 'default.rgw.log']
Jan 20 19:07:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:55 compute-0 podman[260522]: 2026-01-20 19:07:55.172664619 +0000 UTC m=+0.138971210 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:07:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:55.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015044925563261053 of space, bias 1.0, pg target 0.45134776689783157 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:07:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:07:55 compute-0 ceph-mon[74381]: pgmap v772: 337 pgs: 337 active+clean; 192 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.0 MiB/s wr, 78 op/s
Jan 20 19:07:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:07:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:55.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:55 compute-0 nova_compute[254061]: 2026-01-20 19:07:55.914 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:07:55 compute-0 nova_compute[254061]: 2026-01-20 19:07:55.916 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:07:55 compute-0 nova_compute[254061]: 2026-01-20 19:07:55.916 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:07:55 compute-0 nova_compute[254061]: 2026-01-20 19:07:55.916 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:07:55 compute-0 nova_compute[254061]: 2026-01-20 19:07:55.917 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:07:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v773: 337 pgs: 337 active+clean; 192 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 20 19:07:56 compute-0 sudo[260550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:07:56 compute-0 sudo[260550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:56 compute-0 sudo[260550]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:56 compute-0 ceph-mon[74381]: pgmap v773: 337 pgs: 337 active+clean; 192 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 257 KiB/s rd, 2.0 MiB/s wr, 46 op/s
Jan 20 19:07:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:07:56 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 15.
Jan 20 19:07:56 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:07:56 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.621s CPU time.
Jan 20 19:07:56 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 19:07:57 compute-0 podman[260626]: 2026-01-20 19:07:57.009990111 +0000 UTC m=+0.055558798 container create dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/460d0f2ab4a3249c2e01dbce6b7554ef02625f2c2dc5544355d8ffbd415ee2b1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/460d0f2ab4a3249c2e01dbce6b7554ef02625f2c2dc5544355d8ffbd415ee2b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/460d0f2ab4a3249c2e01dbce6b7554ef02625f2c2dc5544355d8ffbd415ee2b1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/460d0f2ab4a3249c2e01dbce6b7554ef02625f2c2dc5544355d8ffbd415ee2b1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:57 compute-0 podman[260626]: 2026-01-20 19:07:57.075118697 +0000 UTC m=+0.120687424 container init dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:07:57 compute-0 podman[260626]: 2026-01-20 19:07:56.985131616 +0000 UTC m=+0.030700373 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:07:57 compute-0 podman[260626]: 2026-01-20 19:07:57.080531083 +0000 UTC m=+0.126099760 container start dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:07:57 compute-0 bash[260626]: dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d
Jan 20 19:07:57 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:57.154Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:57.154Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:07:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:07:57 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:07:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:57.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:07:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:57.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:07:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v774: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 19:07:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:07:58.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:07:59 compute-0 ceph-mon[74381]: pgmap v774: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 19:07:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:07:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:07:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:07:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:07:59.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:07:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:59] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Jan 20 19:07:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:07:59] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Jan 20 19:08:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v775: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:08:00 compute-0 nova_compute[254061]: 2026-01-20 19:08:00.920 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:08:00 compute-0 nova_compute[254061]: 2026-01-20 19:08:00.921 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:08:00 compute-0 nova_compute[254061]: 2026-01-20 19:08:00.922 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:08:00 compute-0 nova_compute[254061]: 2026-01-20 19:08:00.922 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:08:00 compute-0 nova_compute[254061]: 2026-01-20 19:08:00.955 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:00 compute-0 nova_compute[254061]: 2026-01-20 19:08:00.956 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:08:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:01 compute-0 ceph-mon[74381]: pgmap v775: 337 pgs: 337 active+clean; 200 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:08:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:01.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v776: 337 pgs: 337 active+clean; 155 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.165 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.167 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.167 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:08:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:08:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1235292389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.619 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.709 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.709 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.873 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.874 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4341MB free_disk=59.92430877685547GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.875 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.875 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.935 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Instance 120a65b5-a5a0-4431-bfbb-56c5468d25a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.936 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.936 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:08:02 compute-0 nova_compute[254061]: 2026-01-20 19:08:02.987 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:08:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:03 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:08:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:03 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:08:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:03.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:08:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1995905559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:03 compute-0 nova_compute[254061]: 2026-01-20 19:08:03.417 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:08:03 compute-0 nova_compute[254061]: 2026-01-20 19:08:03.423 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:08:03 compute-0 ceph-mon[74381]: pgmap v776: 337 pgs: 337 active+clean; 155 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 337 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 20 19:08:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1235292389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:03 compute-0 nova_compute[254061]: 2026-01-20 19:08:03.444 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:08:03 compute-0 nova_compute[254061]: 2026-01-20 19:08:03.446 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:08:03 compute-0 nova_compute[254061]: 2026-01-20 19:08:03.446 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:03.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v777: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 877 KiB/s wr, 64 op/s
Jan 20 19:08:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1995905559' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:04 compute-0 ceph-mon[74381]: pgmap v777: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 877 KiB/s wr, 64 op/s
Jan 20 19:08:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4249039112' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:05.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.445 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.446 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.446 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.677 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.677 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquired lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.677 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.678 254065 DEBUG nova.objects.instance [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 120a65b5-a5a0-4431-bfbb-56c5468d25a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:08:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:05.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.957 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.960 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.960 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:08:05 compute-0 nova_compute[254061]: 2026-01-20 19:08:05.961 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:08:06 compute-0 nova_compute[254061]: 2026-01-20 19:08:06.007 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:06 compute-0 nova_compute[254061]: 2026-01-20 19:08:06.008 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:08:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2771056704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/339069068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v778: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 107 KiB/s wr, 47 op/s
Jan 20 19:08:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:07 compute-0 ceph-mon[74381]: pgmap v778: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 107 KiB/s wr, 47 op/s
Jan 20 19:08:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1686987916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/570759466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:07.155Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:08:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:07.155Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:07.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:07 compute-0 ovn_controller[155128]: 2026-01-20T19:08:07Z|00034|binding|INFO|Releasing lport 286a9bf9-bd18-4196-95d5-fe7ca2fbe5bf from this chassis (sb_readonly=0)
Jan 20 19:08:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:07.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.803 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.821 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.837 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Releasing lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.837 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.838 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.838 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.838 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.838 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:07 compute-0 nova_compute[254061]: 2026-01-20 19:08:07.838 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:08:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2186424784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v779: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 107 KiB/s wr, 49 op/s
Jan 20 19:08:08 compute-0 nova_compute[254061]: 2026-01-20 19:08:08.516 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:08 compute-0 nova_compute[254061]: 2026-01-20 19:08:08.516 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:08:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:08.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:08:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:08.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:09 compute-0 ceph-mon[74381]: pgmap v779: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 107 KiB/s wr, 49 op/s
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:09 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:08:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:09.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:09] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Jan 20 19:08:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:09] "GET /metrics HTTP/1.1" 200 48462 "" "Prometheus/2.51.0"
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.895 254065 DEBUG nova.compute.manager [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-changed-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.896 254065 DEBUG nova.compute.manager [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Refreshing instance network info cache due to event network-changed-cfcfd83d-5be0-4a39-9bc1-94ae78153295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.896 254065 DEBUG oslo_concurrency.lockutils [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.897 254065 DEBUG oslo_concurrency.lockutils [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.897 254065 DEBUG nova.network.neutron [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Refreshing network info cache for port cfcfd83d-5be0-4a39-9bc1-94ae78153295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.963 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.964 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.964 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.965 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.965 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.967 254065 INFO nova.compute.manager [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Terminating instance
Jan 20 19:08:09 compute-0 nova_compute[254061]: 2026-01-20 19:08:09.970 254065 DEBUG nova.compute.manager [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 19:08:10 compute-0 kernel: tapcfcfd83d-5b (unregistering): left promiscuous mode
Jan 20 19:08:10 compute-0 NetworkManager[48914]: <info>  [1768936090.0400] device (tapcfcfd83d-5b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:08:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:10 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0434000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:10 compute-0 ovn_controller[155128]: 2026-01-20T19:08:10Z|00035|binding|INFO|Releasing lport cfcfd83d-5be0-4a39-9bc1-94ae78153295 from this chassis (sb_readonly=0)
Jan 20 19:08:10 compute-0 ovn_controller[155128]: 2026-01-20T19:08:10Z|00036|binding|INFO|Setting lport cfcfd83d-5be0-4a39-9bc1-94ae78153295 down in Southbound
Jan 20 19:08:10 compute-0 ovn_controller[155128]: 2026-01-20T19:08:10Z|00037|binding|INFO|Removing iface tapcfcfd83d-5b ovn-installed in OVS
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.053 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.055 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.061 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7d:76:12 10.100.0.6'], port_security=['fa:16:3e:7d:76:12 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '120a65b5-a5a0-4431-bfbb-56c5468d25a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-be9957c5-bb46-4eb1-886f-ace069f03c77', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '4', 'neutron:security_group_ids': '42ca4ceb-b47f-4881-86bc-67ed2569e13c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8599ef1-9c40-40f6-97bc-4f256790f7ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=cfcfd83d-5be0-4a39-9bc1-94ae78153295) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.063 165659 INFO neutron.agent.ovn.metadata.agent [-] Port cfcfd83d-5be0-4a39-9bc1-94ae78153295 in datapath be9957c5-bb46-4eb1-886f-ace069f03c77 unbound from our chassis
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.065 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network be9957c5-bb46-4eb1-886f-ace069f03c77, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.066 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[69e08d7a-a90f-4b0a-ac83-9e550e3b37e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.067 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77 namespace which is not needed anymore
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.091 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 20 19:08:10 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 17.868s CPU time.
Jan 20 19:08:10 compute-0 systemd-machined[220746]: Machine qemu-1-instance-00000001 terminated.
Jan 20 19:08:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v780: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.193 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.201 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.210 254065 INFO nova.virt.libvirt.driver [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Instance destroyed successfully.
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.211 254065 DEBUG nova.objects.instance [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'resources' on Instance uuid 120a65b5-a5a0-4431-bfbb-56c5468d25a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.230 254065 DEBUG nova.virt.libvirt.vif [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:06:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-84660842',display_name='tempest-TestNetworkBasicOps-server-84660842',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-84660842',id=1,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFU7AxNv7ZeURl1+csXbYC/yFx+mGOUnV8YctLQySdOGLbNML9aoeg2PcBDcpPXGhyvDZG90VA03RRAO3sskooaLNd6/MsjrlH5CyWAQVkGencURtEhb/4rZrGfyF5EWzw==',key_name='tempest-TestNetworkBasicOps-1594253247',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:06:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-h1lditqw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:06:59Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=120a65b5-a5a0-4431-bfbb-56c5468d25a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.231 254065 DEBUG nova.network.os_vif_util [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.232 254065 DEBUG nova.network.os_vif_util [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.233 254065 DEBUG os_vif [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.235 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.235 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfcfd83d-5b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:08:10 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [NOTICE]   (259545) : haproxy version is 2.8.14-c23fe91
Jan 20 19:08:10 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [NOTICE]   (259545) : path to executable is /usr/sbin/haproxy
Jan 20 19:08:10 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [WARNING]  (259545) : Exiting Master process...
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.238 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [ALERT]    (259545) : Current worker (259547) exited with code 143 (Terminated)
Jan 20 19:08:10 compute-0 neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77[259541]: [WARNING]  (259545) : All workers exited. Exiting... (0)
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.239 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 systemd[1]: libpod-93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea.scope: Deactivated successfully.
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.245 254065 INFO os_vif [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7d:76:12,bridge_name='br-int',has_traffic_filtering=True,id=cfcfd83d-5be0-4a39-9bc1-94ae78153295,network=Network(be9957c5-bb46-4eb1-886f-ace069f03c77),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcfcfd83d-5b')
Jan 20 19:08:10 compute-0 podman[260782]: 2026-01-20 19:08:10.249464082 +0000 UTC m=+0.069621169 container died 93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 20 19:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea-userdata-shm.mount: Deactivated successfully.
Jan 20 19:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e43c1e92bfc636b8119da72484c22b94474b8eade6b64880cf9ed4981cb3034e-merged.mount: Deactivated successfully.
Jan 20 19:08:10 compute-0 podman[260782]: 2026-01-20 19:08:10.303091306 +0000 UTC m=+0.123248423 container cleanup 93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:08:10 compute-0 systemd[1]: libpod-conmon-93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea.scope: Deactivated successfully.
Jan 20 19:08:10 compute-0 podman[260840]: 2026-01-20 19:08:10.396617082 +0000 UTC m=+0.058824846 container remove 93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.403 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf2d2d5-c3a6-4de5-9e6d-f8bec83f419a]: (4, ('Tue Jan 20 07:08:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77 (93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea)\n93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea\nTue Jan 20 07:08:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77 (93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea)\n93f3f0ad4e71909fcf4d0cc06729cc8a9b74e1a94cba98ba0a23b145deda24ea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.405 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[8489247b-4597-4a69-8f7b-2f6b10e8fea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.407 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe9957c5-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.443 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 kernel: tapbe9957c5-b0: left promiscuous mode
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.463 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.467 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[691f5192-2b7a-4d9a-8f57-342efba98414]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.488 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[9f7aac3b-6965-4226-89ae-e29b084ef99f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.490 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[63cb9a4e-5dba-47de-9687-8736c033834d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:08:10 compute-0 ceph-mon[74381]: pgmap v780: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Jan 20 19:08:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:10 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.511 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[80743a44-5626-4041-8608-15e94397154e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421694, 'reachable_time': 35165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260855, 'error': None, 'target': 'ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.527 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-be9957c5-bb46-4eb1-886f-ace069f03c77 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 19:08:10 compute-0 systemd[1]: run-netns-ovnmeta\x2dbe9957c5\x2dbb46\x2d4eb1\x2d886f\x2dace069f03c77.mount: Deactivated successfully.
Jan 20 19:08:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:10.528 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb18ffe-5a96-493e-a369-856d1fa4bc2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.756 254065 INFO nova.virt.libvirt.driver [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Deleting instance files /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6_del
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.757 254065 INFO nova.virt.libvirt.driver [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Deletion of /var/lib/nova/instances/120a65b5-a5a0-4431-bfbb-56c5468d25a6_del complete
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.820 254065 DEBUG nova.virt.libvirt.host [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.821 254065 INFO nova.virt.libvirt.host [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] UEFI support detected
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.823 254065 INFO nova.compute.manager [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Took 0.85 seconds to destroy the instance on the hypervisor.
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.824 254065 DEBUG oslo.service.loopingcall [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.827 254065 DEBUG nova.compute.manager [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 19:08:10 compute-0 nova_compute[254061]: 2026-01-20 19:08:10.827 254065 DEBUG nova.network.neutron [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.010 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:11 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:11.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:11.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.996 254065 DEBUG nova.compute.manager [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-vif-unplugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.996 254065 DEBUG oslo_concurrency.lockutils [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.997 254065 DEBUG oslo_concurrency.lockutils [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.997 254065 DEBUG oslo_concurrency.lockutils [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.997 254065 DEBUG nova.compute.manager [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] No waiting events found dispatching network-vif-unplugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.997 254065 DEBUG nova.compute.manager [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-vif-unplugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.998 254065 DEBUG nova.compute.manager [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.998 254065 DEBUG oslo_concurrency.lockutils [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.998 254065 DEBUG oslo_concurrency.lockutils [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.998 254065 DEBUG oslo_concurrency.lockutils [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.998 254065 DEBUG nova.compute.manager [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] No waiting events found dispatching network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:08:11 compute-0 nova_compute[254061]: 2026-01-20 19:08:11.998 254065 WARNING nova.compute.manager [req-ec26afd2-4e01-4787-b5e7-fffcd1473201 req-c7a1a476-97cd-4b74-a660-4733a43d5c99 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received unexpected event network-vif-plugged-cfcfd83d-5be0-4a39-9bc1-94ae78153295 for instance with vm_state active and task_state deleting.
Jan 20 19:08:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:12 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v781: 337 pgs: 337 active+clean; 59 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 41 op/s
Jan 20 19:08:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:12 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.591 254065 DEBUG nova.network.neutron [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.607 254065 INFO nova.compute.manager [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Took 1.78 seconds to deallocate network for instance.
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.650 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.651 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.708 254065 DEBUG nova.compute.manager [req-59c1cab3-5cae-4237-af0d-ce5864b2ee1b req-ee085719-717a-4911-8b64-5a50705223dd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Received event network-vif-deleted-cfcfd83d-5be0-4a39-9bc1-94ae78153295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.712 254065 DEBUG oslo_concurrency.processutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.747 254065 DEBUG nova.network.neutron [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updated VIF entry in instance network info cache for port cfcfd83d-5be0-4a39-9bc1-94ae78153295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.750 254065 DEBUG nova.network.neutron [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Updating instance_info_cache with network_info: [{"id": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "address": "fa:16:3e:7d:76:12", "network": {"id": "be9957c5-bb46-4eb1-886f-ace069f03c77", "bridge": "br-int", "label": "tempest-network-smoke--835136308", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcfcfd83d-5b", "ovs_interfaceid": "cfcfd83d-5be0-4a39-9bc1-94ae78153295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:08:12 compute-0 nova_compute[254061]: 2026-01-20 19:08:12.768 254065 DEBUG oslo_concurrency.lockutils [req-eab8e25b-7741-4a84-a163-7a1e1f8b18c7 req-ce979a5f-7160-4155-9175-e88ebacbb61a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-120a65b5-a5a0-4431-bfbb-56c5468d25a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:08:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190813 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:08:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:13 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:08:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4181925061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:13 compute-0 nova_compute[254061]: 2026-01-20 19:08:13.255 254065 DEBUG oslo_concurrency.processutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:08:13 compute-0 nova_compute[254061]: 2026-01-20 19:08:13.261 254065 DEBUG nova.compute.provider_tree [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:08:13 compute-0 nova_compute[254061]: 2026-01-20 19:08:13.274 254065 DEBUG nova.scheduler.client.report [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:08:13 compute-0 nova_compute[254061]: 2026-01-20 19:08:13.291 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:13 compute-0 nova_compute[254061]: 2026-01-20 19:08:13.328 254065 INFO nova.scheduler.client.report [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Deleted allocations for instance 120a65b5-a5a0-4431-bfbb-56c5468d25a6
Jan 20 19:08:13 compute-0 nova_compute[254061]: 2026-01-20 19:08:13.388 254065 DEBUG oslo_concurrency.lockutils [None req-03ae0dfd-3709-4046-8ca2-b640cada5f3c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "120a65b5-a5a0-4431-bfbb-56c5468d25a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:13 compute-0 ceph-mon[74381]: pgmap v781: 337 pgs: 337 active+clean; 59 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 17 KiB/s wr, 41 op/s
Jan 20 19:08:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4181925061' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:13.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:14 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Jan 20 19:08:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:14 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:14 compute-0 ceph-mon[74381]: pgmap v782: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 1.9 KiB/s wr, 40 op/s
Jan 20 19:08:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:15 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c000ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:15 compute-0 nova_compute[254061]: 2026-01-20 19:08:15.240 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:15.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:16 compute-0 nova_compute[254061]: 2026-01-20 19:08:16.011 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:16 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 20 19:08:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:16 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0420001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:16 compute-0 sudo[260887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:08:16 compute-0 sudo[260887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:16 compute-0 sudo[260887]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:16 compute-0 sudo[260910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:16 compute-0 sudo[260910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:16 compute-0 sudo[260910]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:16 compute-0 sudo[260938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 19:08:16 compute-0 sudo[260938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:16 compute-0 podman[260936]: 2026-01-20 19:08:16.706714738 +0000 UTC m=+0.097915556 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 20 19:08:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:17 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:17.156Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:08:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:17.156Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:08:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:17.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:17 compute-0 ceph-mon[74381]: pgmap v783: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 20 19:08:17 compute-0 podman[261052]: 2026-01-20 19:08:17.28905602 +0000 UTC m=+0.073199417 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:08:17 compute-0 podman[261052]: 2026-01-20 19:08:17.383167592 +0000 UTC m=+0.167310959 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:08:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:17.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:18 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c0019c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:18 compute-0 podman[261189]: 2026-01-20 19:08:18.110459964 +0000 UTC m=+0.086232589 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:08:18 compute-0 podman[261189]: 2026-01-20 19:08:18.131448233 +0000 UTC m=+0.107220848 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:08:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 20 19:08:18 compute-0 nova_compute[254061]: 2026-01-20 19:08:18.332 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:18 compute-0 nova_compute[254061]: 2026-01-20 19:08:18.432 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:18 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:18 compute-0 podman[261262]: 2026-01-20 19:08:18.54398857 +0000 UTC m=+0.092523890 container exec dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 19:08:18 compute-0 podman[261262]: 2026-01-20 19:08:18.564396473 +0000 UTC m=+0.112931803 container exec_died dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 19:08:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:18.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:18 compute-0 podman[261327]: 2026-01-20 19:08:18.914742374 +0000 UTC m=+0.095082429 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 19:08:18 compute-0 podman[261327]: 2026-01-20 19:08:18.931254391 +0000 UTC m=+0.111594446 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 19:08:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:19 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:19 compute-0 podman[261392]: 2026-01-20 19:08:19.209925378 +0000 UTC m=+0.068683253 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, com.redhat.component=keepalived-container, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Jan 20 19:08:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:19.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:19 compute-0 podman[261392]: 2026-01-20 19:08:19.25937094 +0000 UTC m=+0.118128815 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=2.2.4, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, architecture=x86_64, build-date=2023-02-22T09:23:20, release=1793)
Jan 20 19:08:19 compute-0 ceph-mon[74381]: pgmap v784: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.6 KiB/s wr, 29 op/s
Jan 20 19:08:19 compute-0 podman[261456]: 2026-01-20 19:08:19.617423749 +0000 UTC m=+0.081171962 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:08:19 compute-0 podman[261456]: 2026-01-20 19:08:19.697162211 +0000 UTC m=+0.160910334 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:08:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:19.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:19] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Jan 20 19:08:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:19] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Jan 20 19:08:19 compute-0 podman[261532]: 2026-01-20 19:08:19.930123849 +0000 UTC m=+0.049687459 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 19:08:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:20 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 20 19:08:20 compute-0 podman[261532]: 2026-01-20 19:08:20.143291729 +0000 UTC m=+0.262855319 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 19:08:20 compute-0 nova_compute[254061]: 2026-01-20 19:08:20.243 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:20 compute-0 ceph-mon[74381]: pgmap v785: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 20 19:08:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:20 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c0019c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:20 compute-0 podman[261645]: 2026-01-20 19:08:20.564637585 +0000 UTC m=+0.063375519 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:08:20 compute-0 podman[261645]: 2026-01-20 19:08:20.63784767 +0000 UTC m=+0.136585504 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:08:20 compute-0 sudo[260938]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:08:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:08:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:20 compute-0 sudo[261685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:20 compute-0 sudo[261685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:20 compute-0 sudo[261685]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:20 compute-0 sudo[261710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:08:20 compute-0 sudo[261710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:21 compute-0 nova_compute[254061]: 2026-01-20 19:08:21.012 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:21 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408001f60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:21.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:21 compute-0 sudo[261710]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v786: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 19:08:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:08:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:08:21 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:21 compute-0 sudo[261766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:21 compute-0 sudo[261766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:21 compute-0 sudo[261766]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:21 compute-0 sudo[261791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:08:21 compute-0 sudo[261791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:08:21 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:08:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:21.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:22 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.07379225 +0000 UTC m=+0.048611090 container create 369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:08:22 compute-0 systemd[1]: Started libpod-conmon-369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0.scope.
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.052867462 +0000 UTC m=+0.027686322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.174343587 +0000 UTC m=+0.149162437 container init 369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_bohr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.185413117 +0000 UTC m=+0.160231947 container start 369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.189483868 +0000 UTC m=+0.164302718 container attach 369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:08:22 compute-0 stupefied_bohr[261874]: 167 167
Jan 20 19:08:22 compute-0 systemd[1]: libpod-369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0.scope: Deactivated successfully.
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.197200216 +0000 UTC m=+0.172019056 container died 369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_bohr, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:08:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1282e7fde988872c3ee17d1f23617cfa876f39ae46e677bcce19795b30ac8c55-merged.mount: Deactivated successfully.
Jan 20 19:08:22 compute-0 podman[261858]: 2026-01-20 19:08:22.234782646 +0000 UTC m=+0.209601486 container remove 369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_bohr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:08:22 compute-0 systemd[1]: libpod-conmon-369197dbc5e93237dbb9c2d233d10cd40a37c2cb733fc72af3514a09689cfbf0.scope: Deactivated successfully.
Jan 20 19:08:22 compute-0 podman[261897]: 2026-01-20 19:08:22.440669089 +0000 UTC m=+0.060695137 container create 951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:08:22 compute-0 systemd[1]: Started libpod-conmon-951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a.scope.
Jan 20 19:08:22 compute-0 podman[261897]: 2026-01-20 19:08:22.408088635 +0000 UTC m=+0.028114713 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:22 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f1b2b6139da1aa0c1c7560a891346df698040f66ce006319bd5c0d617f371e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f1b2b6139da1aa0c1c7560a891346df698040f66ce006319bd5c0d617f371e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f1b2b6139da1aa0c1c7560a891346df698040f66ce006319bd5c0d617f371e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f1b2b6139da1aa0c1c7560a891346df698040f66ce006319bd5c0d617f371e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34f1b2b6139da1aa0c1c7560a891346df698040f66ce006319bd5c0d617f371e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:22 compute-0 podman[261897]: 2026-01-20 19:08:22.54066471 +0000 UTC m=+0.160690828 container init 951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_wu, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:08:22 compute-0 podman[261897]: 2026-01-20 19:08:22.557042464 +0000 UTC m=+0.177068502 container start 951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_wu, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:08:22 compute-0 podman[261897]: 2026-01-20 19:08:22.560858668 +0000 UTC m=+0.180884736 container attach 951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 19:08:22 compute-0 ceph-mon[74381]: pgmap v786: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 19:08:22 compute-0 brave_wu[261914]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:08:22 compute-0 brave_wu[261914]: --> All data devices are unavailable
Jan 20 19:08:23 compute-0 systemd[1]: libpod-951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a.scope: Deactivated successfully.
Jan 20 19:08:23 compute-0 podman[261897]: 2026-01-20 19:08:23.010970874 +0000 UTC m=+0.630996942 container died 951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_wu, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-34f1b2b6139da1aa0c1c7560a891346df698040f66ce006319bd5c0d617f371e-merged.mount: Deactivated successfully.
Jan 20 19:08:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:23 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c0019c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:23 compute-0 podman[261897]: 2026-01-20 19:08:23.071797574 +0000 UTC m=+0.691823602 container remove 951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_wu, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:08:23 compute-0 systemd[1]: libpod-conmon-951701fa7216ae71768a493cc63733de597eb85a2e41ad714466a9aaef66e57a.scope: Deactivated successfully.
Jan 20 19:08:23 compute-0 sudo[261791]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:23 compute-0 sudo[261943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:23 compute-0 sudo[261943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:23 compute-0 sudo[261943]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:23.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:23 compute-0 sudo[261968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:08:23 compute-0 sudo[261968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v787: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 18 op/s
Jan 20 19:08:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:23.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.798356196 +0000 UTC m=+0.068459427 container create 43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:08:23 compute-0 systemd[1]: Started libpod-conmon-43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368.scope.
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.763647505 +0000 UTC m=+0.033750776 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.909490519 +0000 UTC m=+0.179593710 container init 43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.922230055 +0000 UTC m=+0.192333256 container start 43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_merkle, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.92571708 +0000 UTC m=+0.195820281 container attach 43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_merkle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:08:23 compute-0 boring_merkle[262049]: 167 167
Jan 20 19:08:23 compute-0 systemd[1]: libpod-43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368.scope: Deactivated successfully.
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.930602562 +0000 UTC m=+0.200705773 container died 43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_merkle, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:08:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-58f32d6dc5cbe499ea7e0cb8adcae0a40b50e55b545046feb13495a869c53775-merged.mount: Deactivated successfully.
Jan 20 19:08:23 compute-0 podman[262033]: 2026-01-20 19:08:23.974901454 +0000 UTC m=+0.245004645 container remove 43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 19:08:24 compute-0 systemd[1]: libpod-conmon-43b8fb66b4a8d63a0a1be62e4dd4ba372257f20983cff46976004b0106ea4368.scope: Deactivated successfully.
Jan 20 19:08:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:24 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.189759249 +0000 UTC m=+0.050897490 container create 277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 19:08:24 compute-0 systemd[1]: Started libpod-conmon-277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113.scope.
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.166560251 +0000 UTC m=+0.027698522 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924170d24bbe0f38ec345a3608a4309382aa66199624dac9c518398394a1497b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924170d24bbe0f38ec345a3608a4309382aa66199624dac9c518398394a1497b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924170d24bbe0f38ec345a3608a4309382aa66199624dac9c518398394a1497b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/924170d24bbe0f38ec345a3608a4309382aa66199624dac9c518398394a1497b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.294096389 +0000 UTC m=+0.155234690 container init 277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_taussig, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.301408447 +0000 UTC m=+0.162546688 container start 277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.30516835 +0000 UTC m=+0.166306591 container attach 277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_taussig, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:08:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:24 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]: {
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:     "0": [
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:         {
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "devices": [
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "/dev/loop3"
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             ],
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "lv_name": "ceph_lv0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "lv_size": "21470642176",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "name": "ceph_lv0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "tags": {
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.cluster_name": "ceph",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.crush_device_class": "",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.encrypted": "0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.osd_id": "0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.type": "block",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.vdo": "0",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:                 "ceph.with_tpm": "0"
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             },
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "type": "block",
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:             "vg_name": "ceph_vg0"
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:         }
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]:     ]
Jan 20 19:08:24 compute-0 hopeful_taussig[262092]: }
Jan 20 19:08:24 compute-0 systemd[1]: libpod-277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113.scope: Deactivated successfully.
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.646571957 +0000 UTC m=+0.507710218 container died 277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:08:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-924170d24bbe0f38ec345a3608a4309382aa66199624dac9c518398394a1497b-merged.mount: Deactivated successfully.
Jan 20 19:08:24 compute-0 podman[262075]: 2026-01-20 19:08:24.699183744 +0000 UTC m=+0.560322025 container remove 277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_taussig, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:08:24 compute-0 systemd[1]: libpod-conmon-277f66a890670bafafa857dc69fa0dc2e81ef29001bad53290385dd91b57b113.scope: Deactivated successfully.
Jan 20 19:08:24 compute-0 sudo[261968]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:24 compute-0 ceph-mon[74381]: pgmap v787: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 0 B/s wr, 18 op/s
Jan 20 19:08:24 compute-0 sudo[262112]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:24 compute-0 sudo[262112]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:24 compute-0 sudo[262112]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:24 compute-0 sudo[262137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:08:24 compute-0 sudo[262137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:25 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:25 compute-0 nova_compute[254061]: 2026-01-20 19:08:25.208 254065 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768936090.206146, 120a65b5-a5a0-4431-bfbb-56c5468d25a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:08:25 compute-0 nova_compute[254061]: 2026-01-20 19:08:25.208 254065 INFO nova.compute.manager [-] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] VM Stopped (Lifecycle Event)
Jan 20 19:08:25 compute-0 nova_compute[254061]: 2026-01-20 19:08:25.235 254065 DEBUG nova.compute.manager [None req-91f0685d-96bb-45e5-8d25-1734b96fc4a6 - - - - - -] [instance: 120a65b5-a5a0-4431-bfbb-56c5468d25a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:08:25 compute-0 nova_compute[254061]: 2026-01-20 19:08:25.246 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:25.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.394000726 +0000 UTC m=+0.054643403 container create 651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:08:25 compute-0 systemd[1]: Started libpod-conmon-651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b.scope.
Jan 20 19:08:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v788: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 271 B/s rd, 0 op/s
Jan 20 19:08:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.364126726 +0000 UTC m=+0.024769383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.487556293 +0000 UTC m=+0.148198970 container init 651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.497882452 +0000 UTC m=+0.158525109 container start 651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.503082384 +0000 UTC m=+0.163725061 container attach 651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:08:25 compute-0 fervent_beaver[262219]: 167 167
Jan 20 19:08:25 compute-0 systemd[1]: libpod-651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b.scope: Deactivated successfully.
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.507492293 +0000 UTC m=+0.168134970 container died 651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-72e79ce48b28eaf5bf5836376db46d84c1ca60eccac1a1993717fca248cd1325-merged.mount: Deactivated successfully.
Jan 20 19:08:25 compute-0 podman[262202]: 2026-01-20 19:08:25.550411367 +0000 UTC m=+0.211054014 container remove 651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_beaver, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:08:25 compute-0 systemd[1]: libpod-conmon-651c23cbd2281ddbb1098b5aa4c43795c9b4afa8dd8e3e81202109c3dea40b7b.scope: Deactivated successfully.
Jan 20 19:08:25 compute-0 podman[262216]: 2026-01-20 19:08:25.647364306 +0000 UTC m=+0.196898060 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:08:25 compute-0 podman[262270]: 2026-01-20 19:08:25.771235496 +0000 UTC m=+0.069827375 container create f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:08:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:25.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:08:25 compute-0 systemd[1]: Started libpod-conmon-f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e.scope.
Jan 20 19:08:25 compute-0 podman[262270]: 2026-01-20 19:08:25.736674178 +0000 UTC m=+0.035266117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63db195a8eb9c2b95bd9e1b8813347487782301a5734e4535cbb94c3f687ce0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63db195a8eb9c2b95bd9e1b8813347487782301a5734e4535cbb94c3f687ce0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63db195a8eb9c2b95bd9e1b8813347487782301a5734e4535cbb94c3f687ce0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b63db195a8eb9c2b95bd9e1b8813347487782301a5734e4535cbb94c3f687ce0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:25 compute-0 podman[262270]: 2026-01-20 19:08:25.890662334 +0000 UTC m=+0.189254263 container init f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:08:25 compute-0 podman[262270]: 2026-01-20 19:08:25.898825606 +0000 UTC m=+0.197417495 container start f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:08:25 compute-0 podman[262270]: 2026-01-20 19:08:25.904621503 +0000 UTC m=+0.203213372 container attach f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:08:26 compute-0 nova_compute[254061]: 2026-01-20 19:08:26.016 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:26 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:26 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408003340 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:26 compute-0 lvm[262363]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:08:26 compute-0 lvm[262363]: VG ceph_vg0 finished
Jan 20 19:08:26 compute-0 determined_shamir[262287]: {}
Jan 20 19:08:26 compute-0 lvm[262367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:08:26 compute-0 lvm[262367]: VG ceph_vg0 finished
Jan 20 19:08:26 compute-0 systemd[1]: libpod-f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e.scope: Deactivated successfully.
Jan 20 19:08:26 compute-0 podman[262270]: 2026-01-20 19:08:26.768450068 +0000 UTC m=+1.067041947 container died f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:26 compute-0 systemd[1]: libpod-f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e.scope: Consumed 1.606s CPU time.
Jan 20 19:08:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-b63db195a8eb9c2b95bd9e1b8813347487782301a5734e4535cbb94c3f687ce0-merged.mount: Deactivated successfully.
Jan 20 19:08:26 compute-0 ceph-mon[74381]: pgmap v788: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 271 B/s rd, 0 op/s
Jan 20 19:08:26 compute-0 podman[262270]: 2026-01-20 19:08:26.818151215 +0000 UTC m=+1.116743064 container remove f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_shamir, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:08:26 compute-0 systemd[1]: libpod-conmon-f9343880a885e90b05a9f549aa75ea9a57f76ec38bfee982d81f37809519425e.scope: Deactivated successfully.
Jan 20 19:08:26 compute-0 sudo[262137]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:08:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:08:26 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:27 compute-0 sudo[262379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:08:27 compute-0 sudo[262379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:27 compute-0 sudo[262379]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:27 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400002f00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:27.157Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:27.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v789: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 452 B/s rd, 0 op/s
Jan 20 19:08:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:27.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:27 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:27 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:08:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:28 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:28 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:28.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:29 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:29.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:29 compute-0 ceph-mon[74381]: pgmap v789: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 452 B/s rd, 0 op/s
Jan 20 19:08:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v790: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 271 B/s rd, 0 op/s
Jan 20 19:08:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:29.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:29] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Jan 20 19:08:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:29] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Jan 20 19:08:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:30 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:30 compute-0 nova_compute[254061]: 2026-01-20 19:08:30.249 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:30.285 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:08:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:30.285 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:08:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:30.285 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:08:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:30 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:31 compute-0 nova_compute[254061]: 2026-01-20 19:08:31.018 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:31 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:31.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:31 compute-0 ceph-mon[74381]: pgmap v790: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 271 B/s rd, 0 op/s
Jan 20 19:08:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 271 B/s rd, 0 op/s
Jan 20 19:08:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:31.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:32 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:32 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0400003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:33 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f040c002e50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:33.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:33 compute-0 ceph-mon[74381]: pgmap v791: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 271 B/s rd, 0 op/s
Jan 20 19:08:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:08:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:33.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:34 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0408004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:08:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3353599639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:08:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[260641]: 20/01/2026 19:08:34 : epoch 696fd28d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f04200030f0 fd 39 proxy ignored for local
Jan 20 19:08:34 compute-0 kernel: ganesha.nfsd[260743]: segfault at 50 ip 00007f04b4d5232e sp 00007f042f7fd210 error 4 in libntirpc.so.5.8[7f04b4d37000+2c000] likely on CPU 5 (core 0, socket 5)
Jan 20 19:08:34 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 19:08:34 compute-0 systemd[1]: Started Process Core Dump (PID 262412/UID 0).
Jan 20 19:08:35 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:35.169 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:08:35 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:35.170 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:08:35 compute-0 nova_compute[254061]: 2026-01-20 19:08:35.170 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:35 compute-0 nova_compute[254061]: 2026-01-20 19:08:35.250 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:35.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:08:35 compute-0 ceph-mon[74381]: pgmap v792: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Jan 20 19:08:35 compute-0 systemd-coredump[262413]: Process 260645 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007f04b4d5232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 19:08:35 compute-0 systemd[1]: systemd-coredump@15-262412-0.service: Deactivated successfully.
Jan 20 19:08:35 compute-0 systemd[1]: systemd-coredump@15-262412-0.service: Consumed 1.137s CPU time.
Jan 20 19:08:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:35.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:35 compute-0 podman[262418]: 2026-01-20 19:08:35.851910088 +0000 UTC m=+0.029683036 container died dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:08:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-460d0f2ab4a3249c2e01dbce6b7554ef02625f2c2dc5544355d8ffbd415ee2b1-merged.mount: Deactivated successfully.
Jan 20 19:08:35 compute-0 podman[262418]: 2026-01-20 19:08:35.888312835 +0000 UTC m=+0.066085693 container remove dba301c3b9bce0bcc4a0a2feb3150accc5e8d80491fee017b345220780b63d1d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:08:35 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 19:08:36 compute-0 nova_compute[254061]: 2026-01-20 19:08:36.021 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:36 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:08:36 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.597s CPU time.
Jan 20 19:08:36 compute-0 sudo[262461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:08:36 compute-0 sudo[262461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:36 compute-0 sudo[262461]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:36 compute-0 ceph-mon[74381]: pgmap v793: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:08:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:37.159Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:37.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v794: 337 pgs: 337 active+clean; 88 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 20 19:08:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:37.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:38.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:38 compute-0 ceph-mon[74381]: pgmap v794: 337 pgs: 337 active+clean; 88 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 20 19:08:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:39.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v795: 337 pgs: 337 active+clean; 88 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 20 19:08:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:39.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:39] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Jan 20 19:08:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:39] "GET /metrics HTTP/1.1" 200 48438 "" "Prometheus/2.51.0"
Jan 20 19:08:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/476441562' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:08:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2032869731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:08:40 compute-0 nova_compute[254061]: 2026-01-20 19:08:40.253 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:40 compute-0 ceph-mon[74381]: pgmap v795: 337 pgs: 337 active+clean; 88 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 25 op/s
Jan 20 19:08:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:08:41 compute-0 nova_compute[254061]: 2026-01-20 19:08:41.023 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190841 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:08:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:41.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v796: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 19:08:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:08:42.171 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:08:42 compute-0 ceph-mon[74381]: pgmap v796: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 19:08:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:43.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v797: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 19:08:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:44 compute-0 ceph-mon[74381]: pgmap v797: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 19:08:45 compute-0 nova_compute[254061]: 2026-01-20 19:08:45.256 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:45.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v798: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 19:08:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:46 compute-0 nova_compute[254061]: 2026-01-20 19:08:46.025 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:46 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 16.
Jan 20 19:08:46 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:08:46 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.597s CPU time.
Jan 20 19:08:46 compute-0 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1...
Jan 20 19:08:46 compute-0 podman[262541]: 2026-01-20 19:08:46.528333225 +0000 UTC m=+0.058097367 container create 890c9045cd3ab3a7d7e549d04dfc48de28b810c959d1ccbe485ba26818660b0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561f1266344d4a0521a9a4369fb24a0769c57ebea46ffc1a91604785be39fed5/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561f1266344d4a0521a9a4369fb24a0769c57ebea46ffc1a91604785be39fed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561f1266344d4a0521a9a4369fb24a0769c57ebea46ffc1a91604785be39fed5/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561f1266344d4a0521a9a4369fb24a0769c57ebea46ffc1a91604785be39fed5/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ulclbx-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:46 compute-0 podman[262541]: 2026-01-20 19:08:46.511774256 +0000 UTC m=+0.041538418 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:08:46 compute-0 podman[262541]: 2026-01-20 19:08:46.613322759 +0000 UTC m=+0.143086941 container init 890c9045cd3ab3a7d7e549d04dfc48de28b810c959d1ccbe485ba26818660b0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 19:08:46 compute-0 podman[262541]: 2026-01-20 19:08:46.619211419 +0000 UTC m=+0.148975571 container start 890c9045cd3ab3a7d7e549d04dfc48de28b810c959d1ccbe485ba26818660b0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:46 compute-0 bash[262541]: 890c9045cd3ab3a7d7e549d04dfc48de28b810c959d1ccbe485ba26818660b0d
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Jan 20 19:08:46 compute-0 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Jan 20 19:08:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:08:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190847 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:08:47 compute-0 podman[262598]: 2026-01-20 19:08:47.10122859 +0000 UTC m=+0.068308344 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 19:08:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:47.159Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:08:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:47.159Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:08:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:47.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:08:47 compute-0 ceph-mon[74381]: pgmap v798: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Jan 20 19:08:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:47.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v799: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 19:08:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:08:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 5478 writes, 24K keys, 5476 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.03 MB/s
                                           Cumulative WAL: 5478 writes, 5476 syncs, 1.00 writes per sync, written: 0.04 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1493 writes, 6453 keys, 1493 commit groups, 1.0 writes per commit group, ingest: 11.10 MB, 0.02 MB/s
                                           Interval WAL: 1493 writes, 1493 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    122.4      0.31              0.12        13    0.024       0      0       0.0       0.0
                                             L6      1/0   13.25 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   4.2    145.9    126.3      1.25              0.43        12    0.104     63K   6270       0.0       0.0
                                            Sum      1/0   13.25 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   5.2    117.0    125.5      1.56              0.55        25    0.062     63K   6270       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    128.5    131.1      0.59              0.23        10    0.059     29K   2604       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    145.9    126.3      1.25              0.43        12    0.104     63K   6270       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    124.1      0.30              0.12        12    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.037, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.19 GB write, 0.11 MB/s write, 0.18 GB read, 0.10 MB/s read, 1.6 seconds
                                           Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.13 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564b95c0c9b0#2 capacity: 304.00 MB usage: 11.46 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000145 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(678,10.94 MB,3.59853%) FilterBlock(26,186.11 KB,0.0597853%) IndexBlock(26,342.61 KB,0.110059%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:08:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:48.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:49.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:49 compute-0 ceph-mon[74381]: pgmap v799: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 108 op/s
Jan 20 19:08:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2057213993' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:08:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2057213993' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:08:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v800: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 20 19:08:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:49] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Jan 20 19:08:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:49] "GET /metrics HTTP/1.1" 200 48466 "" "Prometheus/2.51.0"
Jan 20 19:08:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:49.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:50 compute-0 nova_compute[254061]: 2026-01-20 19:08:50.260 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:51 compute-0 nova_compute[254061]: 2026-01-20 19:08:51.059 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:51.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v801: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 20 19:08:51 compute-0 ceph-mon[74381]: pgmap v800: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 20 19:08:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:51.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:52 compute-0 ceph-mon[74381]: pgmap v801: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 83 op/s
Jan 20 19:08:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:52 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:08:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:52 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:08:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:52 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Jan 20 19:08:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:53.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:53 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:08:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:53 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:08:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:53 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:08:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v802: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.545738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936133545770, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1006, "num_deletes": 251, "total_data_size": 1733243, "memory_usage": 1755840, "flush_reason": "Manual Compaction"}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936133565013, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1687809, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24207, "largest_seqno": 25212, "table_properties": {"data_size": 1682930, "index_size": 2403, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10860, "raw_average_key_size": 19, "raw_value_size": 1673069, "raw_average_value_size": 3053, "num_data_blocks": 107, "num_entries": 548, "num_filter_entries": 548, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936047, "oldest_key_time": 1768936047, "file_creation_time": 1768936133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 19335 microseconds, and 5732 cpu microseconds.
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.565069) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1687809 bytes OK
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.565091) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.568214) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.568245) EVENT_LOG_v1 {"time_micros": 1768936133568238, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.568266) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1728554, prev total WAL file size 1728554, number of live WAL files 2.
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.570490) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1648KB)], [53(13MB)]
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936133570563, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 15584829, "oldest_snapshot_seqno": -1}
Jan 20 19:08:53 compute-0 ovn_controller[155128]: 2026-01-20T19:08:53Z|00038|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5647 keys, 13367336 bytes, temperature: kUnknown
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936133669035, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 13367336, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13329674, "index_size": 22448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14149, "raw_key_size": 145078, "raw_average_key_size": 25, "raw_value_size": 13227509, "raw_average_value_size": 2342, "num_data_blocks": 908, "num_entries": 5647, "num_filter_entries": 5647, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936133, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.669409) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 13367336 bytes
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.670955) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 158.1 rd, 135.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 13.3 +0.0 blob) out(12.7 +0.0 blob), read-write-amplify(17.2) write-amplify(7.9) OK, records in: 6163, records dropped: 516 output_compression: NoCompression
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.670999) EVENT_LOG_v1 {"time_micros": 1768936133670976, "job": 28, "event": "compaction_finished", "compaction_time_micros": 98587, "compaction_time_cpu_micros": 25756, "output_level": 6, "num_output_files": 1, "total_output_size": 13367336, "num_input_records": 6163, "num_output_records": 5647, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936133671629, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936133676519, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.570120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.676688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.676698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.676701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.676704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:53 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:08:53.676707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:53.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:54 compute-0 ceph-mon[74381]: pgmap v802: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 20 19:08:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:08:54
Jan 20 19:08:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:08:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:08:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['backups', '.mgr', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'cephfs.cephfs.meta']
Jan 20 19:08:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:55 compute-0 nova_compute[254061]: 2026-01-20 19:08:55.263 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:08:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:55.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:08:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v803: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 20 19:08:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:08:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:08:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:55.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:08:56 compute-0 nova_compute[254061]: 2026-01-20 19:08:56.110 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:08:56 compute-0 podman[262626]: 2026-01-20 19:08:56.183653722 +0000 UTC m=+0.154000847 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 19:08:56 compute-0 ceph-mon[74381]: pgmap v803: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Jan 20 19:08:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:08:56 compute-0 sudo[262656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:08:56 compute-0 sudo[262656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:56 compute-0 sudo[262656]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:57 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 19:08:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:57.160Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:57.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v804: 337 pgs: 337 active+clean; 113 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Jan 20 19:08:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:58 compute-0 ceph-mon[74381]: pgmap v804: 337 pgs: 337 active+clean; 113 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 125 op/s
Jan 20 19:08:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:58.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:08:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:08:58.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:08:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:08:59.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:08:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v805: 337 pgs: 337 active+clean; 113 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:08:59 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:08:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:59] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 20 19:08:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:08:59] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 20 19:08:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:08:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:08:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:08:59.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:00 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdb04000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:00 compute-0 nova_compute[254061]: 2026-01-20 19:09:00.267 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:00 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:00 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:01 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:01 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:01 compute-0 nova_compute[254061]: 2026-01-20 19:09:01.111 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:01 compute-0 ceph-mon[74381]: pgmap v805: 337 pgs: 337 active+clean; 113 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 51 op/s
Jan 20 19:09:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:01.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v806: 337 pgs: 337 active+clean; 119 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:09:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:01.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:02 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf0001230 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.163 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.163 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.164 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.164 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.164 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:02 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:09:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:02 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:09:02 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:02 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:09:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2395174244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.650 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.828 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.829 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4576MB free_disk=59.94317626953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.829 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.829 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.917 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.917 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:09:02 compute-0 nova_compute[254061]: 2026-01-20 19:09:02.940 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190903 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:09:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:03 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:03 compute-0 ceph-mon[74381]: pgmap v806: 337 pgs: 337 active+clean; 119 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 318 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:09:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2395174244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:03.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:09:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165034350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:03 compute-0 nova_compute[254061]: 2026-01-20 19:09:03.365 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:03 compute-0 nova_compute[254061]: 2026-01-20 19:09:03.371 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:09:03 compute-0 nova_compute[254061]: 2026-01-20 19:09:03.396 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:09:03 compute-0 nova_compute[254061]: 2026-01-20 19:09:03.438 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:09:03 compute-0 nova_compute[254061]: 2026-01-20 19:09:03.439 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 19:09:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:03.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:04 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1165034350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:04 compute-0 nova_compute[254061]: 2026-01-20 19:09:04.440 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:04 compute-0 nova_compute[254061]: 2026-01-20 19:09:04.440 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:04 compute-0 nova_compute[254061]: 2026-01-20 19:09:04.440 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:04 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:04 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf0001d50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:05 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:05 compute-0 nova_compute[254061]: 2026-01-20 19:09:05.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:05 compute-0 nova_compute[254061]: 2026-01-20 19:09:05.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:09:05 compute-0 ceph-mon[74381]: pgmap v807: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 19:09:05 compute-0 nova_compute[254061]: 2026-01-20 19:09:05.270 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:05.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 19:09:05 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:05 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:09:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:05.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:06 compute-0 nova_compute[254061]: 2026-01-20 19:09:06.113 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:06 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:06 compute-0 nova_compute[254061]: 2026-01-20 19:09:06.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2304505356' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3201054541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1711841358' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:06 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:06 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:07 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:07 compute-0 nova_compute[254061]: 2026-01-20 19:09:07.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:07 compute-0 nova_compute[254061]: 2026-01-20 19:09:07.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:09:07 compute-0 nova_compute[254061]: 2026-01-20 19:09:07.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:09:07 compute-0 nova_compute[254061]: 2026-01-20 19:09:07.148 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:09:07 compute-0 nova_compute[254061]: 2026-01-20 19:09:07.148 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:07.161Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:07.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 19:09:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:07.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:08 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:08 compute-0 nova_compute[254061]: 2026-01-20 19:09:08.144 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:09:08 compute-0 ceph-mon[74381]: pgmap v808: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 20 19:09:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/561447396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/989865676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:08 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80023e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:08.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:09 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:09 compute-0 ceph-mon[74381]: pgmap v809: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 2.1 MiB/s wr, 70 op/s
Jan 20 19:09:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:09.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v810: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 81 KiB/s wr, 19 op/s
Jan 20 19:09:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:09] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 20 19:09:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:09] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Jan 20 19:09:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:09.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:10 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf0002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:09:10 compute-0 nova_compute[254061]: 2026-01-20 19:09:10.274 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:10 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:10 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:11 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:11 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:11 compute-0 nova_compute[254061]: 2026-01-20 19:09:11.116 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:11 compute-0 ceph-mon[74381]: pgmap v810: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 81 KiB/s wr, 19 op/s
Jan 20 19:09:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:11.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v811: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 81 KiB/s wr, 19 op/s
Jan 20 19:09:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:11.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:12 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:12 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:12 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf0002670 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:13 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:13 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:13.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:13 compute-0 ceph-mon[74381]: pgmap v811: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 81 KiB/s wr, 19 op/s
Jan 20 19:09:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v812: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 64 KiB/s wr, 8 op/s
Jan 20 19:09:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:13.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:14 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:14 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:14 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:15 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:15 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:15 compute-0 nova_compute[254061]: 2026-01-20 19:09:15.278 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:15.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:15 compute-0 ceph-mon[74381]: pgmap v812: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 64 KiB/s wr, 8 op/s
Jan 20 19:09:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v813: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 15 KiB/s wr, 2 op/s
Jan 20 19:09:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:16 compute-0 nova_compute[254061]: 2026-01-20 19:09:16.120 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:16 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:16 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:16 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:16 compute-0 sudo[262763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:09:16 compute-0 sudo[262763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:16 compute-0 sudo[262763]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:17 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf00035d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:17.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:09:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:17.162Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:09:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:17.163Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:09:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:17.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:17 compute-0 ceph-mon[74381]: pgmap v813: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 15 KiB/s wr, 2 op/s
Jan 20 19:09:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 19 KiB/s wr, 3 op/s
Jan 20 19:09:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:17.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:18 compute-0 podman[262788]: 2026-01-20 19:09:18.125304608 +0000 UTC m=+0.090030118 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:09:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:18 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae00032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:18 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:18.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:19 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:19.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:19 compute-0 ceph-mon[74381]: pgmap v814: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 19 KiB/s wr, 3 op/s
Jan 20 19:09:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 6.0 KiB/s wr, 1 op/s
Jan 20 19:09:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:19] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 20 19:09:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:19] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Jan 20 19:09:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:19.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:20 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf00035d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:20 compute-0 nova_compute[254061]: 2026-01-20 19:09:20.282 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:20 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:20 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:21 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:21 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:21 compute-0 nova_compute[254061]: 2026-01-20 19:09:21.122 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:21.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v816: 337 pgs: 337 active+clean; 71 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.2 KiB/s wr, 26 op/s
Jan 20 19:09:21 compute-0 ceph-mon[74381]: pgmap v815: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 7.8 KiB/s rd, 6.0 KiB/s wr, 1 op/s
Jan 20 19:09:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:22 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:22 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Jan 20 19:09:22 compute-0 ceph-mon[74381]: pgmap v816: 337 pgs: 337 active+clean; 71 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 7.2 KiB/s wr, 26 op/s
Jan 20 19:09:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3802165269' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:22 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:22 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf00035d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:23 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:23 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:23.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 7.6 KiB/s wr, 30 op/s
Jan 20 19:09:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:24 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:24 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:24 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:24 compute-0 ceph-mon[74381]: pgmap v817: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 7.6 KiB/s wr, 30 op/s
Jan 20 19:09:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:25 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:25 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Jan 20 19:09:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:25 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:09:25 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:25 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Jan 20 19:09:25 compute-0 nova_compute[254061]: 2026-01-20 19:09:25.285 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:25.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.6 KiB/s wr, 30 op/s
Jan 20 19:09:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:09:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:25.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:26 compute-0 nova_compute[254061]: 2026-01-20 19:09:26.125 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:26 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:26 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:26 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:26 compute-0 ceph-mon[74381]: pgmap v818: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.6 KiB/s wr, 30 op/s
Jan 20 19:09:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:27 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf00035d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:27 compute-0 podman[262818]: 2026-01-20 19:09:27.129316785 +0000 UTC m=+0.097362020 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 20 19:09:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:27.164Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:09:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:27.164Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:09:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:27.165Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:27 compute-0 sudo[262846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:27 compute-0 sudo[262846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:27 compute-0 sudo[262846]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:27.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:27 compute-0 sudo[262871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:09:27 compute-0 sudo[262871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 6.1 KiB/s wr, 32 op/s
Jan 20 19:09:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:27.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:27 compute-0 sudo[262871]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 35 op/s
Jan 20 19:09:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:09:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:28 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:09:28 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:28 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:28 compute-0 sudo[262928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:28 compute-0 sudo[262928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:28 compute-0 sudo[262928]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:28 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Jan 20 19:09:28 compute-0 sudo[262953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:09:28 compute-0 sudo[262953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:28 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.591001845 +0000 UTC m=+0.022502517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.796054169 +0000 UTC m=+0.227554791 container create f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:09:28 compute-0 ceph-mon[74381]: pgmap v819: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 6.1 KiB/s wr, 32 op/s
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:09:28 compute-0 ceph-mon[74381]: pgmap v820: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 35 op/s
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:09:28 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:09:28 compute-0 systemd[1]: Started libpod-conmon-f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c.scope.
Jan 20 19:09:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.876473579 +0000 UTC m=+0.307974221 container init f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.893385487 +0000 UTC m=+0.324886109 container start f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_nash, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.896651344 +0000 UTC m=+0.328151996 container attach f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 19:09:28 compute-0 flamboyant_nash[263034]: 167 167
Jan 20 19:09:28 compute-0 systemd[1]: libpod-f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c.scope: Deactivated successfully.
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.901964724 +0000 UTC m=+0.333465376 container died f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_nash, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-9930d4c9bee7312cb63edc165cd8da0790569a899fef7ab3777a8432984e3699-merged.mount: Deactivated successfully.
Jan 20 19:09:28 compute-0 podman[263018]: 2026-01-20 19:09:28.943866895 +0000 UTC m=+0.375367537 container remove f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_nash, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:09:28 compute-0 systemd[1]: libpod-conmon-f32135f7240fe571065f7d96c6db37a902a30688055fc2f62da2e2aa6613151c.scope: Deactivated successfully.
Jan 20 19:09:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:29 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.145044766 +0000 UTC m=+0.044212903 container create a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:09:29 compute-0 systemd[1]: Started libpod-conmon-a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433.scope.
Jan 20 19:09:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7592fc4651d318d5162be0e5e0510c6027d58a1cffe9ab516001d53f88e5a276/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7592fc4651d318d5162be0e5e0510c6027d58a1cffe9ab516001d53f88e5a276/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7592fc4651d318d5162be0e5e0510c6027d58a1cffe9ab516001d53f88e5a276/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7592fc4651d318d5162be0e5e0510c6027d58a1cffe9ab516001d53f88e5a276/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7592fc4651d318d5162be0e5e0510c6027d58a1cffe9ab516001d53f88e5a276/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.127272544 +0000 UTC m=+0.026440721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.222481527 +0000 UTC m=+0.121649744 container init a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.232897514 +0000 UTC m=+0.132065661 container start a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.235844022 +0000 UTC m=+0.135012269 container attach a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 19:09:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:29.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:29 compute-0 amazing_williams[263074]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:09:29 compute-0 amazing_williams[263074]: --> All data devices are unavailable
Jan 20 19:09:29 compute-0 systemd[1]: libpod-a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433.scope: Deactivated successfully.
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.636053586 +0000 UTC m=+0.535221783 container died a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:09:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7592fc4651d318d5162be0e5e0510c6027d58a1cffe9ab516001d53f88e5a276-merged.mount: Deactivated successfully.
Jan 20 19:09:29 compute-0 podman[263058]: 2026-01-20 19:09:29.686378279 +0000 UTC m=+0.585546436 container remove a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:09:29 compute-0 systemd[1]: libpod-conmon-a59b2a84dd33504b74537b730c207198ea61631d67e9fc8db4ec6152769f4433.scope: Deactivated successfully.
Jan 20 19:09:29 compute-0 sudo[262953]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:29] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Jan 20 19:09:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:29] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Jan 20 19:09:29 compute-0 sudo[263102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:29 compute-0 sudo[263102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:29 compute-0 sudo[263102]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:29.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:29 compute-0 sudo[263127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:09:29 compute-0 sudo[263127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 35 op/s
Jan 20 19:09:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:30 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:30.285 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:30.286 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:30.287 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:30 compute-0 nova_compute[254061]: 2026-01-20 19:09:30.289 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.316995278 +0000 UTC m=+0.054991938 container create 58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:09:30 compute-0 systemd[1]: Started libpod-conmon-58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73.scope.
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.286922602 +0000 UTC m=+0.024919352 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:09:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.408183864 +0000 UTC m=+0.146180564 container init 58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_sammet, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.417348418 +0000 UTC m=+0.155345088 container start 58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.421292901 +0000 UTC m=+0.159289601 container attach 58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_sammet, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:09:30 compute-0 systemd[1]: libpod-58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73.scope: Deactivated successfully.
Jan 20 19:09:30 compute-0 compassionate_sammet[263212]: 167 167
Jan 20 19:09:30 compute-0 conmon[263212]: conmon 58d91bbf3da4813e1838 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73.scope/container/memory.events
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.424053264 +0000 UTC m=+0.162049944 container died 58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:09:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-040e786722c825d7e6443f4ca1014f20105b21afe206e7ed6a462366d2627a64-merged.mount: Deactivated successfully.
Jan 20 19:09:30 compute-0 podman[263195]: 2026-01-20 19:09:30.460058519 +0000 UTC m=+0.198055179 container remove 58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_sammet, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:09:30 compute-0 systemd[1]: libpod-conmon-58d91bbf3da4813e18387fb318b19ff4fd856cdbdaf2a2dc9e3be54ba2b13e73.scope: Deactivated successfully.
Jan 20 19:09:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:30 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf00035d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:30 compute-0 podman[263236]: 2026-01-20 19:09:30.62311893 +0000 UTC m=+0.037866975 container create 4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jemison, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:09:30 compute-0 systemd[1]: Started libpod-conmon-4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62.scope.
Jan 20 19:09:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ab5258963e7fb2fb935976a00d2e05c2bd418949a5ab75c28676d67a5eb2b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ab5258963e7fb2fb935976a00d2e05c2bd418949a5ab75c28676d67a5eb2b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ab5258963e7fb2fb935976a00d2e05c2bd418949a5ab75c28676d67a5eb2b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7ab5258963e7fb2fb935976a00d2e05c2bd418949a5ab75c28676d67a5eb2b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:30 compute-0 podman[263236]: 2026-01-20 19:09:30.606294574 +0000 UTC m=+0.021042629 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:09:30 compute-0 podman[263236]: 2026-01-20 19:09:30.705488181 +0000 UTC m=+0.120236246 container init 4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jemison, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:09:30 compute-0 podman[263236]: 2026-01-20 19:09:30.715385365 +0000 UTC m=+0.130133410 container start 4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default)
Jan 20 19:09:30 compute-0 podman[263236]: 2026-01-20 19:09:30.718345793 +0000 UTC m=+0.133093838 container attach 4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:09:30 compute-0 youthful_jemison[263253]: {
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:     "0": [
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:         {
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "devices": [
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "/dev/loop3"
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             ],
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "lv_name": "ceph_lv0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "lv_size": "21470642176",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "name": "ceph_lv0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "tags": {
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.cluster_name": "ceph",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.crush_device_class": "",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.encrypted": "0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.osd_id": "0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.type": "block",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.vdo": "0",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:                 "ceph.with_tpm": "0"
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             },
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "type": "block",
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:             "vg_name": "ceph_vg0"
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:         }
Jan 20 19:09:30 compute-0 youthful_jemison[263253]:     ]
Jan 20 19:09:30 compute-0 youthful_jemison[263253]: }
Jan 20 19:09:30 compute-0 systemd[1]: libpod-4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62.scope: Deactivated successfully.
Jan 20 19:09:30 compute-0 podman[263236]: 2026-01-20 19:09:30.985579034 +0000 UTC m=+0.400327079 container died 4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jemison, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7ab5258963e7fb2fb935976a00d2e05c2bd418949a5ab75c28676d67a5eb2b4-merged.mount: Deactivated successfully.
Jan 20 19:09:31 compute-0 podman[263236]: 2026-01-20 19:09:31.026885698 +0000 UTC m=+0.441633753 container remove 4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_jemison, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:09:31 compute-0 systemd[1]: libpod-conmon-4de2f43ee3dd698cc0d4bd26c3707f62deac95f3a033153ed16c45665a842c62.scope: Deactivated successfully.
Jan 20 19:09:31 compute-0 sudo[263127]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:31 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:31 compute-0 sudo[263273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:31 compute-0 sudo[263273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:31 compute-0 sudo[263273]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:31 compute-0 ceph-mon[74381]: pgmap v821: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.4 KiB/s wr, 35 op/s
Jan 20 19:09:31 compute-0 nova_compute[254061]: 2026-01-20 19:09:31.146 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:31 compute-0 sudo[263298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:09:31 compute-0 sudo[263298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 19:09:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:31.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.623468595 +0000 UTC m=+0.049737398 container create 1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:09:31 compute-0 systemd[1]: Started libpod-conmon-1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6.scope.
Jan 20 19:09:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.599465799 +0000 UTC m=+0.025734652 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.697298262 +0000 UTC m=+0.123567095 container init 1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.705508269 +0000 UTC m=+0.131777112 container start 1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yalow, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.709373911 +0000 UTC m=+0.135642724 container attach 1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yalow, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:09:31 compute-0 strange_yalow[263381]: 167 167
Jan 20 19:09:31 compute-0 systemd[1]: libpod-1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6.scope: Deactivated successfully.
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.711084937 +0000 UTC m=+0.137353740 container died 1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yalow, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:09:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-85896ba12f5a60975af716567b172fcb219619b3893ac0707b6bb6504fc6e6b0-merged.mount: Deactivated successfully.
Jan 20 19:09:31 compute-0 podman[263365]: 2026-01-20 19:09:31.74479591 +0000 UTC m=+0.171064713 container remove 1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:31 compute-0 systemd[1]: libpod-conmon-1e6821812bea3815654c3bc6896b90dbf859cbcf850f13840a771ab53e7efae6.scope: Deactivated successfully.
Jan 20 19:09:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:31.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:31 compute-0 podman[263405]: 2026-01-20 19:09:31.926828913 +0000 UTC m=+0.048740582 container create abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 19:09:31 compute-0 systemd[1]: Started libpod-conmon-abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3.scope.
Jan 20 19:09:31 compute-0 podman[263405]: 2026-01-20 19:09:31.901256645 +0000 UTC m=+0.023168304 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:09:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624da83dff4c6f1e1620df7e920c2ae43f0e25c4e232a62db1e45b46944d6bb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624da83dff4c6f1e1620df7e920c2ae43f0e25c4e232a62db1e45b46944d6bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624da83dff4c6f1e1620df7e920c2ae43f0e25c4e232a62db1e45b46944d6bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/624da83dff4c6f1e1620df7e920c2ae43f0e25c4e232a62db1e45b46944d6bb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:32 compute-0 podman[263405]: 2026-01-20 19:09:32.029154775 +0000 UTC m=+0.151066444 container init abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:09:32 compute-0 podman[263405]: 2026-01-20 19:09:32.036013977 +0000 UTC m=+0.157925616 container start abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:09:32 compute-0 podman[263405]: 2026-01-20 19:09:32.040460164 +0000 UTC m=+0.162371823 container attach abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 19:09:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.0 KiB/s wr, 6 op/s
Jan 20 19:09:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:32 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:32 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:32 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:32 compute-0 lvm[263501]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:09:32 compute-0 lvm[263501]: VG ceph_vg0 finished
Jan 20 19:09:32 compute-0 mystifying_antonelli[263422]: {}
Jan 20 19:09:32 compute-0 systemd[1]: libpod-abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3.scope: Deactivated successfully.
Jan 20 19:09:32 compute-0 podman[263405]: 2026-01-20 19:09:32.83587353 +0000 UTC m=+0.957785179 container died abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:32 compute-0 systemd[1]: libpod-abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3.scope: Consumed 1.310s CPU time.
Jan 20 19:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-624da83dff4c6f1e1620df7e920c2ae43f0e25c4e232a62db1e45b46944d6bb2-merged.mount: Deactivated successfully.
Jan 20 19:09:32 compute-0 podman[263405]: 2026-01-20 19:09:32.89516665 +0000 UTC m=+1.017078319 container remove abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_antonelli, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:09:32 compute-0 systemd[1]: libpod-conmon-abaf931dcde53d89d1c0e846fed3957f45aebcc4fca7f00d690d568b8fb913e3.scope: Deactivated successfully.
Jan 20 19:09:32 compute-0 sudo[263298]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:09:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:09:32 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:33 compute-0 sudo[263517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:09:33 compute-0 sudo[263517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:33 compute-0 sudo[263517]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:33 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:33 compute-0 ceph-mon[74381]: pgmap v822: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1.0 KiB/s wr, 6 op/s
Jan 20 19:09:33 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:33 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:09:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:33.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:33.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 676 B/s wr, 2 op/s
Jan 20 19:09:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:34 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:34 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:34 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:35 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190935 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Jan 20 19:09:35 compute-0 ceph-mon[74381]: pgmap v823: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 676 B/s wr, 2 op/s
Jan 20 19:09:35 compute-0 nova_compute[254061]: 2026-01-20 19:09:35.293 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:35.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:35.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 676 B/s wr, 2 op/s
Jan 20 19:09:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:36 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:36 compute-0 nova_compute[254061]: 2026-01-20 19:09:36.194 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:36 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:36.559 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:09:36 compute-0 nova_compute[254061]: 2026-01-20 19:09:36.560 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:36 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:36.560 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:09:36 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:36 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:37 compute-0 sudo[263546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:09:37 compute-0 sudo[263546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:37 compute-0 sudo[263546]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:37 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:37.166Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:09:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:37.167Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:37 compute-0 ceph-mon[74381]: pgmap v824: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 676 B/s wr, 2 op/s
Jan 20 19:09:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:37.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:37.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 96 B/s wr, 0 op/s
Jan 20 19:09:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:38 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:38 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:38.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:39 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:39 compute-0 ceph-mon[74381]: pgmap v825: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 290 B/s rd, 96 B/s wr, 0 op/s
Jan 20 19:09:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:39] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Jan 20 19:09:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:39] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Jan 20 19:09:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:39.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:09:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:40 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:09:40 compute-0 nova_compute[254061]: 2026-01-20 19:09:40.296 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:40 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:40 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:41 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:41 compute-0 nova_compute[254061]: 2026-01-20 19:09:41.196 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:41 compute-0 ceph-mon[74381]: pgmap v826: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:09:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:41.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:41 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:41.562 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:09:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:41.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v827: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:09:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:42 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:42 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:43 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:43 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:43 compute-0 ceph-mon[74381]: pgmap v827: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:09:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:43.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:43.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v828: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:09:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:44 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:44 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:44 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:44 compute-0 nova_compute[254061]: 2026-01-20 19:09:44.887 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:44 compute-0 nova_compute[254061]: 2026-01-20 19:09:44.888 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:44 compute-0 nova_compute[254061]: 2026-01-20 19:09:44.910 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.001 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.002 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.011 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.012 254065 INFO nova.compute.claims [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Claim successful on node compute-0.ctlplane.example.com
Jan 20 19:09:45 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:45 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.108 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.299 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:45 compute-0 ceph-mon[74381]: pgmap v828: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Jan 20 19:09:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:45.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:09:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764613852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.624 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.634 254065 DEBUG nova.compute.provider_tree [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.654 254065 DEBUG nova.scheduler.client.report [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.682 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.683 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.746 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.747 254065 DEBUG nova.network.neutron [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.776 254065 INFO nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.796 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 19:09:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:45.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.906 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.908 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.909 254065 INFO nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Creating image(s)
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.954 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:45 compute-0 nova_compute[254061]: 2026-01-20 19:09:45.992 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.029 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.034 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v829: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.127 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.130 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.132 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.132 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.173 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.178 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 390552fe-c600-4ce3-a209-851b5c0a067d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.206 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.286 254065 DEBUG nova.policy [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:09:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2764613852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.559 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 390552fe-c600-4ce3-a209-851b5c0a067d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.381s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:46 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:46 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.644 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] resizing rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.741 254065 DEBUG nova.objects.instance [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'migration_context' on Instance uuid 390552fe-c600-4ce3-a209-851b5c0a067d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.758 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.759 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Ensure instance console log exists: /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.759 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.760 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:46 compute-0 nova_compute[254061]: 2026-01-20 19:09:46.760 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:47 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdafc003cc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:47 compute-0 nova_compute[254061]: 2026-01-20 19:09:47.140 254065 DEBUG nova.network.neutron [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Successfully created port: b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 19:09:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:47.168Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:47 compute-0 ceph-mon[74381]: pgmap v829: 337 pgs: 337 active+clean; 41 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Jan 20 19:09:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:47.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:47 compute-0 nova_compute[254061]: 2026-01-20 19:09:47.867 254065 DEBUG nova.network.neutron [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Successfully updated port: b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:09:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:47.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:47 compute-0 nova_compute[254061]: 2026-01-20 19:09:47.881 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:09:47 compute-0 nova_compute[254061]: 2026-01-20 19:09:47.881 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:09:47 compute-0 nova_compute[254061]: 2026-01-20 19:09:47.882 254065 DEBUG nova.network.neutron [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:09:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v830: 337 pgs: 337 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.1 MiB/s wr, 1 op/s
Jan 20 19:09:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:48 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:48 compute-0 nova_compute[254061]: 2026-01-20 19:09:48.200 254065 DEBUG nova.compute.manager [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-changed-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:09:48 compute-0 nova_compute[254061]: 2026-01-20 19:09:48.200 254065 DEBUG nova.compute.manager [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Refreshing instance network info cache due to event network-changed-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:09:48 compute-0 nova_compute[254061]: 2026-01-20 19:09:48.201 254065 DEBUG oslo_concurrency.lockutils [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:09:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:09:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158913762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:09:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:09:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1158913762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:09:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:48 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:48 compute-0 nova_compute[254061]: 2026-01-20 19:09:48.594 254065 DEBUG nova.network.neutron [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 19:09:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:48.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:49 compute-0 podman[263771]: 2026-01-20 19:09:49.096065978 +0000 UTC m=+0.064975283 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:09:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:49 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:49.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:49 compute-0 ceph-mon[74381]: pgmap v830: 337 pgs: 337 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.1 MiB/s wr, 1 op/s
Jan 20 19:09:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1158913762' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:09:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1158913762' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.614 254065 DEBUG nova.network.neutron [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updating instance_info_cache with network_info: [{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.633 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.633 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Instance network_info: |[{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.633 254065 DEBUG oslo_concurrency.lockutils [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.633 254065 DEBUG nova.network.neutron [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Refreshing network info cache for port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.635 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Start _get_guest_xml network_info=[{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'bc57af0c-4b71-499e-9808-3c8fc070a488'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.641 254065 WARNING nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.646 254065 DEBUG nova.virt.libvirt.host [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.647 254065 DEBUG nova.virt.libvirt.host [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.653 254065 DEBUG nova.virt.libvirt.host [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.653 254065 DEBUG nova.virt.libvirt.host [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.654 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.654 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T19:05:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7446c314-5a17-42fd-97d9-a7a94e27bff9',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.654 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.654 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.655 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.655 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.655 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.655 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.655 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.655 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.656 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.656 254065 DEBUG nova.virt.hardware [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 19:09:49 compute-0 nova_compute[254061]: 2026-01-20 19:09:49.658 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:49] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:09:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:49] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:09:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:49.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v831: 337 pgs: 337 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.1 MiB/s wr, 1 op/s
Jan 20 19:09:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:09:50 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168557342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.143 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:50 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.181 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.186 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.303 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:50 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:50 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4168557342' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:09:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:09:50 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783454636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.742 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.745 254065 DEBUG nova.virt.libvirt.vif [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-794136600',display_name='tempest-TestNetworkBasicOps-server-794136600',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-794136600',id=4,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3+rAyyjV/EzO01vUAJnl7IV+iKRclPYM6MZ5A/U1F9mbIlYEIUOrWmm0VSDtBi6EyX6b1roJWGutyV+ZX7+SU3lPvUOqicmJKar+2nRoxjyKH+QoCQaxwdC7KzJvVauA==',key_name='tempest-TestNetworkBasicOps-128194792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-iydiyz0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:09:45Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=390552fe-c600-4ce3-a209-851b5c0a067d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.746 254065 DEBUG nova.network.os_vif_util [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.747 254065 DEBUG nova.network.os_vif_util [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.749 254065 DEBUG nova.objects.instance [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_devices' on Instance uuid 390552fe-c600-4ce3-a209-851b5c0a067d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.792 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] End _get_guest_xml xml=<domain type="kvm">
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <uuid>390552fe-c600-4ce3-a209-851b5c0a067d</uuid>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <name>instance-00000004</name>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <memory>131072</memory>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <vcpu>1</vcpu>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:name>tempest-TestNetworkBasicOps-server-794136600</nova:name>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:creationTime>2026-01-20 19:09:49</nova:creationTime>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:flavor name="m1.nano">
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:memory>128</nova:memory>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:disk>1</nova:disk>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:swap>0</nova:swap>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:vcpus>1</nova:vcpus>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </nova:flavor>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:owner>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </nova:owner>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <nova:ports>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <nova:port uuid="b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b">
Jan 20 19:09:50 compute-0 nova_compute[254061]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         </nova:port>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </nova:ports>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </nova:instance>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <sysinfo type="smbios">
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <system>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <entry name="manufacturer">RDO</entry>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <entry name="product">OpenStack Compute</entry>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <entry name="serial">390552fe-c600-4ce3-a209-851b5c0a067d</entry>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <entry name="uuid">390552fe-c600-4ce3-a209-851b5c0a067d</entry>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <entry name="family">Virtual Machine</entry>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </system>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <os>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <boot dev="hd"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <smbios mode="sysinfo"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </os>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <features>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <vmcoreinfo/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </features>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <clock offset="utc">
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <timer name="hpet" present="no"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <cpu mode="host-model" match="exact">
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <disk type="network" device="disk">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/390552fe-c600-4ce3-a209-851b5c0a067d_disk">
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </source>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <target dev="vda" bus="virtio"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <disk type="network" device="cdrom">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/390552fe-c600-4ce3-a209-851b5c0a067d_disk.config">
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </source>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:09:50 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <target dev="sda" bus="sata"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <interface type="ethernet">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <mac address="fa:16:3e:2d:7a:43"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <mtu size="1442"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <target dev="tapb5955f0c-06"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <serial type="pty">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <log file="/var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/console.log" append="off"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <video>
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </video>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <input type="tablet" bus="usb"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <rng model="virtio">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <backend model="random">/dev/urandom</backend>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <controller type="usb" index="0"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     <memballoon model="virtio">
Jan 20 19:09:50 compute-0 nova_compute[254061]:       <stats period="10"/>
Jan 20 19:09:50 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:09:50 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:09:50 compute-0 nova_compute[254061]: </domain>
Jan 20 19:09:50 compute-0 nova_compute[254061]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.793 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Preparing to wait for external event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.793 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.794 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.794 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.794 254065 DEBUG nova.virt.libvirt.vif [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-794136600',display_name='tempest-TestNetworkBasicOps-server-794136600',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-794136600',id=4,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3+rAyyjV/EzO01vUAJnl7IV+iKRclPYM6MZ5A/U1F9mbIlYEIUOrWmm0VSDtBi6EyX6b1roJWGutyV+ZX7+SU3lPvUOqicmJKar+2nRoxjyKH+QoCQaxwdC7KzJvVauA==',key_name='tempest-TestNetworkBasicOps-128194792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-iydiyz0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:09:45Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=390552fe-c600-4ce3-a209-851b5c0a067d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.795 254065 DEBUG nova.network.os_vif_util [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.795 254065 DEBUG nova.network.os_vif_util [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.795 254065 DEBUG os_vif [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.796 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.796 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:09:50 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.797 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.799 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.800 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5955f0c-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.800 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb5955f0c-06, col_values=(('external_ids', {'iface-id': 'b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:7a:43', 'vm-uuid': '390552fe-c600-4ce3-a209-851b5c0a067d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.801 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.803 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:09:50 compute-0 NetworkManager[48914]: <info>  [1768936190.8031] manager: (tapb5955f0c-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.809 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.810 254065 INFO os_vif [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06')
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.874 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.875 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.875 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:2d:7a:43, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.876 254065 INFO nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Using config drive
Jan 20 19:09:50 compute-0 nova_compute[254061]: 2026-01-20 19:09:50.910 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.058 254065 DEBUG nova.network.neutron [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updated VIF entry in instance network info cache for port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.059 254065 DEBUG nova.network.neutron [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updating instance_info_cache with network_info: [{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.073 254065 DEBUG oslo_concurrency.lockutils [req-4718cffa-9f4d-423e-b9c5-24740723be08 req-c3c95cda-200f-47cb-8e03-c543ae89214a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
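The network_info blob cached at 19:09:51.059 is plain JSON once serialized; the fields most consumers read are the port id, MAC address, and fixed IPs. A short, self-contained sketch of pulling those out of the structure shown above (trimmed to the relevant keys):

    # Sketch: extracting the commonly used fields from the cached
    # network_info structure logged above.
    network_info = [{
        "id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b",
        "address": "fa:16:3e:2d:7a:43",
        "network": {"subnets": [{"ips": [{"address": "10.100.0.3"}]}]},
    }]
    for port in network_info:
        ips = [ip["address"]
               for subnet in port["network"]["subnets"]
               for ip in subnet["ips"]]
        print(port["id"], port["address"], ips)
    # -> b5955f0c-... fa:16:3e:2d:7a:43 ['10.100.0.3']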
Jan 20 19:09:51 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:51 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.234 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.280 254065 INFO nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Creating config drive at /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/disk.config
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.289 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9xgad1m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:51.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.432 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm9xgad1m" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
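The config drive is an ISO 9660 image built from a staged temp directory of metadata files; the command logged above can be reproduced directly. A minimal equivalent of what oslo processutils runs here (processutils.execute is a thin wrapper over subprocess; mkisofs must be installed, and the staging path is the one from the log):

    # Sketch: building a config-drive ISO the way the log shows.
    import subprocess

    subprocess.run(
        ["/usr/bin/mkisofs",
         "-o", "disk.config",          # output ISO
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute",
         "-quiet", "-J", "-r",
         "-V", "config-2",             # volume label cloud-init looks for
         "/tmp/tmpm9xgad1m"],          # staged metadata directory
        check=True,
    )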
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.492 254065 DEBUG nova.storage.rbd_utils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 390552fe-c600-4ce3-a209-851b5c0a067d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.496 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/disk.config 390552fe-c600-4ce3-a209-851b5c0a067d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:09:51 compute-0 ceph-mon[74381]: pgmap v831: 337 pgs: 337 active+clean; 60 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 1.1 MiB/s wr, 1 op/s
Jan 20 19:09:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1783454636' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.758 254065 DEBUG oslo_concurrency.processutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/disk.config 390552fe-c600-4ce3-a209-851b5c0a067d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.761 254065 INFO nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Deleting local config drive /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d/disk.config because it was imported into RBD.
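With RBD-backed ephemeral storage, the freshly built ISO is imported into the vms pool and the local copy removed, exactly as the two log lines above show. The same sequence as a hedged subprocess sketch:

    # Sketch of the import-then-delete sequence from the log.
    import os
    import subprocess

    local = ("/var/lib/nova/instances/"
             "390552fe-c600-4ce3-a209-851b5c0a067d/disk.config")
    subprocess.run(
        ["rbd", "import", "--pool", "vms", local,
         "390552fe-c600-4ce3-a209-851b5c0a067d_disk.config",
         "--image-format=2", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True,
    )
    os.unlink(local)  # "Deleting local config drive ... imported into RBD"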
Jan 20 19:09:51 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 19:09:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:51 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 19:09:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:51.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:51 compute-0 kernel: tapb5955f0c-06: entered promiscuous mode
Jan 20 19:09:51 compute-0 NetworkManager[48914]: <info>  [1768936191.9020] manager: (tapb5955f0c-06): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.904 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:51 compute-0 ovn_controller[155128]: 2026-01-20T19:09:51Z|00039|binding|INFO|Claiming lport b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b for this chassis.
Jan 20 19:09:51 compute-0 ovn_controller[155128]: 2026-01-20T19:09:51Z|00040|binding|INFO|b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b: Claiming fa:16:3e:2d:7a:43 10.100.0.3
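ovn-controller claims a logical port once an OVS interface on its chassis carries a matching external_ids:iface-id; os-vif set that marker when it plugged the tap. Reduced to the single ovs-vsctl call that triggers the claim (a sketch; assumes ovs-vsctl on PATH):

    # Sketch: the external_ids:iface-id marker that makes ovn-controller
    # bind the logical port to this chassis.
    import subprocess

    subprocess.run(
        ["ovs-vsctl", "set", "Interface", "tapb5955f0c-06",
         "external_ids:iface-id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b"],
        check=True,
    )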
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.913 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:51 compute-0 nova_compute[254061]: 2026-01-20 19:09:51.915 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.928 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:7a:43 10.100.0.3'], port_security=['fa:16:3e:2d:7a:43 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '390552fe-c600-4ce3-a209-851b5c0a067d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-873c0e56-2798-477a-adc3-8a628bffd4e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ac89db4-88de-404e-8497-9c576b033842', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c6ad307-385c-4724-a394-d49c5c3a804b, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.929 165659 INFO neutron.agent.ovn.metadata.agent [-] Port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b in datapath 873c0e56-2798-477a-adc3-8a628bffd4e1 bound to our chassis
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.930 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 873c0e56-2798-477a-adc3-8a628bffd4e1
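The metadata agent reacts to the Port_Binding update through an ovsdbapp row event, as the "Matched UPDATE: PortBindingUpdatedEvent" line shows: when a port becomes bound to this chassis, it provisions the ovnmeta- namespace for the datapath. A stripped-down sketch of that event shape; only RowEvent and its constructor arguments mirror the log, the rest is illustrative:

    # Sketch of an ovsdbapp row event like the one matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdated(row_event.RowEvent):
        def __init__(self, chassis_name):
            self.chassis_name = chassis_name
            # same (events, table, conditions) triple as in the log
            super().__init__(('update',), 'Port_Binding', None)

        def match_fn(self, event, row, old=None):
            # fire only when the port just became bound to our chassis
            return (row.chassis and
                    row.chassis[0].name == self.chassis_name and
                    not getattr(old, 'chassis', None))

        def run(self, event, row, old):
            print('provision metadata for datapath', row.datapath.uuid)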
Jan 20 19:09:51 compute-0 systemd-udevd[263946]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:09:51 compute-0 systemd-machined[220746]: New machine qemu-2-instance-00000004.
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.954 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[031e2287-bced-4872-9ba4-0b54f1083b3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.955 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap873c0e56-21 in ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
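The veth pair gives the namespace an inside leg (tap873c0e56-21) and an OVS-facing leg (tap873c0e56-20) that is plugged into br-int further down. The same wiring in plain ip(8) calls, sketched via subprocess; neutron actually does this through its privsep ip_lib helpers, as the surrounding privsep replies show:

    # Sketch: namespace + veth wiring equivalent to what the agent logs.
    import subprocess

    ns = "ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1"

    def run(*cmd):
        subprocess.run(cmd, check=True)

    run("ip", "netns", "add", ns)
    run("ip", "link", "add", "tap873c0e56-20",
        "type", "veth", "peer", "name", "tap873c0e56-21")
    run("ip", "link", "set", "tap873c0e56-21", "netns", ns)
    run("ip", "netns", "exec", ns,
        "ip", "link", "set", "tap873c0e56-21", "up")
    run("ip", "link", "set", "tap873c0e56-20", "up")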
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.957 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap873c0e56-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.957 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b3b8774b-a162-4d95-bd30-252c0ab152c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.958 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[0cb5013f-c113-45bb-ad54-0dd5dd6515a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:51 compute-0 NetworkManager[48914]: <info>  [1768936191.9625] device (tapb5955f0c-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:09:51 compute-0 NetworkManager[48914]: <info>  [1768936191.9636] device (tapb5955f0c-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:09:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:51.976 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd42774-35fa-4120-80e6-9d4b5562021f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:51 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000004.
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.008 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[ae7920a6-5b70-4d85-9bc3-6ce46269477e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_controller[155128]: 2026-01-20T19:09:52Z|00041|binding|INFO|Setting lport b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b ovn-installed in OVS
Jan 20 19:09:52 compute-0 ovn_controller[155128]: 2026-01-20T19:09:52Z|00042|binding|INFO|Setting lport b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b up in Southbound
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.022 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.024 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.049 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6cd2e3-26f0-4a3d-96d6-e62aa5bac3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 systemd-udevd[263951]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:09:52 compute-0 NetworkManager[48914]: <info>  [1768936192.0570] manager: (tap873c0e56-20): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.056 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e1df720f-3780-4c06-bbe0-39058209908b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v832: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.100 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[0b37b2af-80e1-474f-b1e0-ab8cc736f64d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.105 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fd8e5c-2858-47f4-9862-adea8014ea39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 NetworkManager[48914]: <info>  [1768936192.1362] device (tap873c0e56-20): carrier: link connected
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.140 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[ee450395-5cc9-4be5-a476-bd64981e2779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.159 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a16379e9-4fa2-42b8-9bd0-68e77201a3ff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap873c0e56-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:fd:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438792, 'reachable_time': 40501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263981, 'error': None, 'target': 'ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:52 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.185 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc8e0c8-526e-4387-96de-776b7a4b002d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb5:fdc5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438792, 'tstamp': 438792}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263997, 'error': None, 'target': 'ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.209 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b91e8581-3baf-4b35-8809-9ba167eef413]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap873c0e56-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b5:fd:c5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438792, 'reachable_time': 40501, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264001, 'error': None, 'target': 'ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.245 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f39167-6b46-4a52-93f7-5be816f22fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.317 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e95c1e74-61ff-412c-8939-3fd715fd6406]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.318 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap873c0e56-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.318 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.319 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap873c0e56-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.358 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:52 compute-0 kernel: tap873c0e56-20: entered promiscuous mode
Jan 20 19:09:52 compute-0 NetworkManager[48914]: <info>  [1768936192.3602] manager: (tap873c0e56-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.363 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.365 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap873c0e56-20, col_values=(('external_ids', {'iface-id': '64efd965-af91-4bc1-a323-2a5f288a9c0b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
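The three OVSDB operations in the transaction logs above (DelPortCommand, AddPortCommand, DbSetCommand) are issued through ovsdbapp's transaction interface. A hedged sketch of the same sequence against the Open_vSwitch schema; how `api` is constructed (connection string, timeouts) is deployment-specific and elided here:

    # Sketch: the del-port/add-port/db-set sequence via ovsdbapp.
    # `api` is an ovsdbapp Open_vSwitch Idl instance.
    def replug_metadata_port(api):
        with api.transaction(check_error=True) as txn:
            txn.add(api.del_port('tap873c0e56-20', bridge='br-ex',
                                 if_exists=True))
            txn.add(api.add_port('br-int', 'tap873c0e56-20',
                                 may_exist=True))
            txn.add(api.db_set(
                'Interface', 'tap873c0e56-20',
                ('external_ids',
                 {'iface-id': '64efd965-af91-4bc1-a323-2a5f288a9c0b'})))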
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.366 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:52 compute-0 ovn_controller[155128]: 2026-01-20T19:09:52Z|00043|binding|INFO|Releasing lport 64efd965-af91-4bc1-a323-2a5f288a9c0b from this chassis (sb_readonly=0)
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.370 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/873c0e56-2798-477a-adc3-8a628bffd4e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/873c0e56-2798-477a-adc3-8a628bffd4e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.371 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[72b3b729-0f69-4f97-9e00-7e6ea11997a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.372 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-873c0e56-2798-477a-adc3-8a628bffd4e1
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/873c0e56-2798-477a-adc3-8a628bffd4e1.pid.haproxy
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID 873c0e56-2798-477a-adc3-8a628bffd4e1
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:09:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:09:52.373 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1', 'env', 'PROCESS_TAG=haproxy-873c0e56-2798-477a-adc3-8a628bffd4e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/873c0e56-2798-477a-adc3-8a628bffd4e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
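The generated haproxy config above binds the link-local metadata address inside the namespace, forwards to the agent's unix socket (a haproxy server address starting with "/" is a unix socket), and tags each request with the network ID header. A quick way to exercise the proxy once it is up, sketched via subprocess (assumes curl is installed):

    # Sketch: exercising the metadata proxy from inside the namespace.
    import subprocess

    ns = "ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1"
    subprocess.run(
        ["ip", "netns", "exec", ns,
         "curl", "-s", "http://169.254.169.254/openstack"],
        check=True,
    )
    # haproxy adds 'X-OVN-Network-ID: 873c0e56-...' before handing the
    # request to the socket at /var/lib/neutron/metadata_proxy.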
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.383 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.406 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936192.405476, 390552fe-c600-4ce3-a209-851b5c0a067d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.407 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] VM Started (Lifecycle Event)
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.432 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.437 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936192.4101183, 390552fe-c600-4ce3-a209-851b5c0a067d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.437 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] VM Paused (Lifecycle Event)
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.469 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.473 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.499 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] During sync_power_state the instance has a pending task (spawning). Skip.
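The "Paused" lifecycle event during spawn is normal: libvirt creates the guest paused, and Nova resumes it moments later. The numeric states in the log (DB power_state 0, VM power_state 3, then 1) map to Nova's power_state constants; a minimal sketch of the skip decision logged above:

    # Sketch of the power-state sync decision, using the values of
    # nova.compute.power_state (NOSTATE=0, RUNNING=1, PAUSED=3).
    NOSTATE, RUNNING, PAUSED = 0, 1, 3

    def sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:           # e.g. 'spawning'
            return "skip: instance has a pending task"
        if db_power_state != vm_power_state:
            return "update DB to %d" % vm_power_state
        return "in sync"

    print(sync_power_state('spawning', NOSTATE, PAUSED))
    # -> skip: instance has a pending task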
Jan 20 19:09:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:52 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:52 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/190952 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:09:52 compute-0 ceph-mon[74381]: pgmap v832: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:09:52 compute-0 podman[264058]: 2026-01-20 19:09:52.818597372 +0000 UTC m=+0.075168363 container create 80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 19:09:52 compute-0 podman[264058]: 2026-01-20 19:09:52.778134889 +0000 UTC m=+0.034705930 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:09:52 compute-0 systemd[1]: Started libpod-conmon-80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc.scope.
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.877 254065 DEBUG nova.compute.manager [req-2814ef41-d103-4ad6-ad59-a7867f3f0631 req-e9a311d6-d28d-4e8a-8861-d252812a69d1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.877 254065 DEBUG oslo_concurrency.lockutils [req-2814ef41-d103-4ad6-ad59-a7867f3f0631 req-e9a311d6-d28d-4e8a-8861-d252812a69d1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.877 254065 DEBUG oslo_concurrency.lockutils [req-2814ef41-d103-4ad6-ad59-a7867f3f0631 req-e9a311d6-d28d-4e8a-8861-d252812a69d1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.878 254065 DEBUG oslo_concurrency.lockutils [req-2814ef41-d103-4ad6-ad59-a7867f3f0631 req-e9a311d6-d28d-4e8a-8861-d252812a69d1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.878 254065 DEBUG nova.compute.manager [req-2814ef41-d103-4ad6-ad59-a7867f3f0631 req-e9a311d6-d28d-4e8a-8861-d252812a69d1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Processing event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.879 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
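The spawn thread registers interest in network-vif-plugged before starting the guest, and the Neutron-driven external event pops that waiter, which is why the wait completes in 0 seconds above. A minimal sketch of that register/pop pattern using threading; Nova's real implementation is eventlet-based and lock-protected, as the lockutils lines show:

    # Sketch: the wait/pop pattern behind wait_for_instance_event.
    import threading

    waiters = {}  # event name -> threading.Event

    def wait_for_instance_event(name, timeout=300):
        ev = waiters.setdefault(name, threading.Event())
        if not ev.wait(timeout):
            raise TimeoutError(name)

    def pop_instance_event(name):
        ev = waiters.pop(name, None)
        if ev is None:
            return None   # no waiter -> "unexpected event" warning path
        ev.set()
        return ev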
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.885 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936192.883619, 390552fe-c600-4ce3-a209-851b5c0a067d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.886 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] VM Resumed (Lifecycle Event)
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.889 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.894 254065 INFO nova.virt.libvirt.driver [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Instance spawned successfully.
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.895 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 19:09:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.919 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:09:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5184f2c797fb5af1c8ebe580319c5f230e4b5280c61a25bacafb9019b80be4ff/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.926 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.926 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.927 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.928 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.929 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.929 254065 DEBUG nova.virt.libvirt.driver [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.936 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:09:52 compute-0 podman[264058]: 2026-01-20 19:09:52.942430442 +0000 UTC m=+0.199001473 container init 80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 19:09:52 compute-0 podman[264058]: 2026-01-20 19:09:52.95399774 +0000 UTC m=+0.210568741 container start 80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.968 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.985 254065 INFO nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Took 7.08 seconds to spawn the instance on the hypervisor.
Jan 20 19:09:52 compute-0 nova_compute[254061]: 2026-01-20 19:09:52.985 254065 DEBUG nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:09:52 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [NOTICE]   (264078) : New worker (264080) forked
Jan 20 19:09:52 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [NOTICE]   (264078) : Loading success.
Jan 20 19:09:53 compute-0 nova_compute[254061]: 2026-01-20 19:09:53.052 254065 INFO nova.compute.manager [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Took 8.08 seconds to build instance.
Jan 20 19:09:53 compute-0 nova_compute[254061]: 2026-01-20 19:09:53.067 254065 DEBUG oslo_concurrency.lockutils [None req-1a601fbf-e949-4504-a7b7-19c530c79368 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:53 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:53 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:53.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:53.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v833: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:09:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:54 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:54 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:54 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:54 compute-0 nova_compute[254061]: 2026-01-20 19:09:54.979 254065 DEBUG nova.compute.manager [req-cbaa3637-931d-4dfc-91e1-c69349944eff req-45f09a3c-7107-4e3e-bae9-0a8f52f2cd9d 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:09:54 compute-0 nova_compute[254061]: 2026-01-20 19:09:54.980 254065 DEBUG oslo_concurrency.lockutils [req-cbaa3637-931d-4dfc-91e1-c69349944eff req-45f09a3c-7107-4e3e-bae9-0a8f52f2cd9d 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:09:54 compute-0 nova_compute[254061]: 2026-01-20 19:09:54.981 254065 DEBUG oslo_concurrency.lockutils [req-cbaa3637-931d-4dfc-91e1-c69349944eff req-45f09a3c-7107-4e3e-bae9-0a8f52f2cd9d 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:09:54 compute-0 nova_compute[254061]: 2026-01-20 19:09:54.981 254065 DEBUG oslo_concurrency.lockutils [req-cbaa3637-931d-4dfc-91e1-c69349944eff req-45f09a3c-7107-4e3e-bae9-0a8f52f2cd9d 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:09:54 compute-0 nova_compute[254061]: 2026-01-20 19:09:54.982 254065 DEBUG nova.compute.manager [req-cbaa3637-931d-4dfc-91e1-c69349944eff req-45f09a3c-7107-4e3e-bae9-0a8f52f2cd9d 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] No waiting events found dispatching network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:09:54 compute-0 nova_compute[254061]: 2026-01-20 19:09:54.982 254065 WARNING nova.compute.manager [req-cbaa3637-931d-4dfc-91e1-c69349944eff req-45f09a3c-7107-4e3e-bae9-0a8f52f2cd9d 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received unexpected event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b for instance with vm_state active and task_state None.
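This second network-vif-plugged arrives after the instance went active, so no waiter is registered for it; the pop finds nothing and Nova logs the warning and drops the event, which is benign here. Continuing the wait/pop sketch from above:

    # Continuing the sketch: a late duplicate event finds no waiter.
    name = 'network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b'
    assert pop_instance_event(name) is None  # already popped earlier
    # -> WARNING "Received unexpected event ... vm_state active"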
Jan 20 19:09:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:09:54
Jan 20 19:09:54 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:09:54 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:09:54 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'vms', '.nfs', 'cephfs.cephfs.meta', 'images']
Jan 20 19:09:54 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:55 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:55 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:55 compute-0 ceph-mon[74381]: pgmap v833: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:09:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003460606319593671 of space, bias 1.0, pg target 0.10381818958781013 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:09:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:09:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:55.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:55 compute-0 nova_compute[254061]: 2026-01-20 19:09:55.858 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:55.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v834: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:09:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:56 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdae0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:56 compute-0 nova_compute[254061]: 2026-01-20 19:09:56.237 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:56 compute-0 nova_compute[254061]: 2026-01-20 19:09:56.435 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:56 compute-0 NetworkManager[48914]: <info>  [1768936196.4362] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Jan 20 19:09:56 compute-0 NetworkManager[48914]: <info>  [1768936196.4373] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 20 19:09:56 compute-0 ovn_controller[155128]: 2026-01-20T19:09:56Z|00044|binding|INFO|Releasing lport 64efd965-af91-4bc1-a323-2a5f288a9c0b from this chassis (sb_readonly=0)
Jan 20 19:09:56 compute-0 nova_compute[254061]: 2026-01-20 19:09:56.516 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:56 compute-0 ovn_controller[155128]: 2026-01-20T19:09:56Z|00045|binding|INFO|Releasing lport 64efd965-af91-4bc1-a323-2a5f288a9c0b from this chassis (sb_readonly=0)
Jan 20 19:09:56 compute-0 nova_compute[254061]: 2026-01-20 19:09:56.521 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:09:56 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:56 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdad4003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Jan 20 19:09:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:09:57 compute-0 nova_compute[254061]: 2026-01-20 19:09:57.072 254065 DEBUG nova.compute.manager [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-changed-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:09:57 compute-0 nova_compute[254061]: 2026-01-20 19:09:57.073 254065 DEBUG nova.compute.manager [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Refreshing instance network info cache due to event network-changed-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:09:57 compute-0 nova_compute[254061]: 2026-01-20 19:09:57.073 254065 DEBUG oslo_concurrency.lockutils [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:09:57 compute-0 nova_compute[254061]: 2026-01-20 19:09:57.073 254065 DEBUG oslo_concurrency.lockutils [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:09:57 compute-0 nova_compute[254061]: 2026-01-20 19:09:57.074 254065 DEBUG nova.network.neutron [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Refreshing network info cache for port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:09:57 compute-0 kernel: ganesha.nfsd[262687]: segfault at 50 ip 00007fdb89a5932e sp 00007fdb17ffe210 error 4 in libntirpc.so.5.8[7fdb89a3e000+2c000] likely on CPU 4 (core 0, socket 4)
Jan 20 19:09:57 compute-0 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Jan 20 19:09:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx[262556]: 20/01/2026 19:09:57 : epoch 696fd2be : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fdaf80045e0 fd 38 proxy ignored for local
Jan 20 19:09:57 compute-0 sudo[264094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:09:57 compute-0 sudo[264094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:57 compute-0 sudo[264094]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:57 compute-0 systemd[1]: Started Process Core Dump (PID 264119/UID 0).
Jan 20 19:09:57 compute-0 ceph-mon[74381]: pgmap v834: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:09:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:57.169Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:09:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:57.169Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:09:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:57.170Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:09:57 compute-0 podman[264121]: 2026-01-20 19:09:57.275321681 +0000 UTC m=+0.104825568 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:09:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:09:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:57.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:09:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:09:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:57.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:09:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v835: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 19:09:58 compute-0 nova_compute[254061]: 2026-01-20 19:09:58.112 254065 DEBUG nova.network.neutron [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updated VIF entry in instance network info cache for port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:09:58 compute-0 nova_compute[254061]: 2026-01-20 19:09:58.113 254065 DEBUG nova.network.neutron [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updating instance_info_cache with network_info: [{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:09:58 compute-0 nova_compute[254061]: 2026-01-20 19:09:58.133 254065 DEBUG oslo_concurrency.lockutils [req-64c94250-c72b-495b-aa70-c800f8a2206b req-ed90c06f-9e89-4b38-88fa-12e1a77c18f5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:09:58 compute-0 systemd-coredump[264120]: Process 262560 (ganesha.nfsd) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 43:
                                                    #0  0x00007fdb89a5932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                    ELF object binary architecture: AMD x86-64
Jan 20 19:09:58 compute-0 ceph-mon[74381]: pgmap v835: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 20 19:09:58 compute-0 systemd[1]: systemd-coredump@16-264119-0.service: Deactivated successfully.
Jan 20 19:09:58 compute-0 systemd[1]: systemd-coredump@16-264119-0.service: Consumed 1.230s CPU time.
Jan 20 19:09:58 compute-0 podman[264155]: 2026-01-20 19:09:58.642960469 +0000 UTC m=+0.031178958 container died 890c9045cd3ab3a7d7e549d04dfc48de28b810c959d1ccbe485ba26818660b0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-561f1266344d4a0521a9a4369fb24a0769c57ebea46ffc1a91604785be39fed5-merged.mount: Deactivated successfully.
Jan 20 19:09:58 compute-0 podman[264155]: 2026-01-20 19:09:58.682490377 +0000 UTC m=+0.070708846 container remove 890c9045cd3ab3a7d7e549d04dfc48de28b810c959d1ccbe485ba26818660b0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-nfs-cephfs-2-0-compute-0-ulclbx, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:09:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Main process exited, code=exited, status=139/n/a
Jan 20 19:09:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:09:58.861Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:09:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:09:58 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.724s CPU time.
Jan 20 19:09:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:09:59.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:09:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:59] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Jan 20 19:09:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:09:59] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Jan 20 19:09:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:09:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:09:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:09:59.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Jan 20 19:10:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Jan 20 19:10:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.zazymd on compute-1 is in unknown state
Jan 20 19:10:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.logial on compute-2 is in unknown state
Jan 20 19:10:00 compute-0 ceph-mon[74381]: Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Jan 20 19:10:00 compute-0 ceph-mon[74381]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Jan 20 19:10:00 compute-0 ceph-mon[74381]:     daemon nfs.cephfs.0.0.compute-1.zazymd on compute-1 is in unknown state
Jan 20 19:10:00 compute-0 ceph-mon[74381]:     daemon nfs.cephfs.1.0.compute-2.logial on compute-2 is in unknown state
Jan 20 19:10:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v836: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 722 KiB/s wr, 99 op/s
Jan 20 19:10:00 compute-0 nova_compute[254061]: 2026-01-20 19:10:00.862 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:01 compute-0 ceph-mon[74381]: pgmap v836: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 722 KiB/s wr, 99 op/s
Jan 20 19:10:01 compute-0 nova_compute[254061]: 2026-01-20 19:10:01.238 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:01.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:10:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:01.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:10:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 722 KiB/s wr, 99 op/s
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.163 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.163 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.164 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.164 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.165 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:10:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:10:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2884546764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.704 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.802 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.803 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000004 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.997 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.998 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4436MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.998 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:02 compute-0 nova_compute[254061]: 2026-01-20 19:10:02.998 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.077 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Instance 390552fe-c600-4ce3-a209-851b5c0a067d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.078 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.078 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:10:03 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/191003 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.130 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:10:03 compute-0 ceph-mon[74381]: pgmap v837: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 722 KiB/s wr, 99 op/s
Jan 20 19:10:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2884546764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:03.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:10:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433466912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.580 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.586 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.602 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.627 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:10:03 compute-0 nova_compute[254061]: 2026-01-20 19:10:03.628 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:10:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:03.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:10:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 20 19:10:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2433466912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:04 compute-0 nova_compute[254061]: 2026-01-20 19:10:04.628 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:05 compute-0 ceph-mon[74381]: pgmap v838: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 75 op/s
Jan 20 19:10:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:05.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:05.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:05 compute-0 nova_compute[254061]: 2026-01-20 19:10:05.904 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 20 19:10:06 compute-0 nova_compute[254061]: 2026-01-20 19:10:06.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:06 compute-0 nova_compute[254061]: 2026-01-20 19:10:06.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:06 compute-0 nova_compute[254061]: 2026-01-20 19:10:06.242 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:06 compute-0 ovn_controller[155128]: 2026-01-20T19:10:06Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:7a:43 10.100.0.3
Jan 20 19:10:06 compute-0 ovn_controller[155128]: 2026-01-20T19:10:06Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:7a:43 10.100.0.3
Jan 20 19:10:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:10:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:07.171Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:07.172Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:07 compute-0 ceph-mon[74381]: pgmap v839: 337 pgs: 337 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 20 19:10:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2825284287' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1002301251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:07.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.606 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.607 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquired lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.607 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 19:10:07 compute-0 nova_compute[254061]: 2026-01-20 19:10:07.608 254065 DEBUG nova.objects.instance [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 390552fe-c600-4ce3-a209-851b5c0a067d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:10:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:07.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v840: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Jan 20 19:10:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4002465441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:08 compute-0 nova_compute[254061]: 2026-01-20 19:10:08.810 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updating instance_info_cache with network_info: [{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:10:08 compute-0 nova_compute[254061]: 2026-01-20 19:10:08.843 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Releasing lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:10:08 compute-0 nova_compute[254061]: 2026-01-20 19:10:08.844 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 19:10:08 compute-0 nova_compute[254061]: 2026-01-20 19:10:08.844 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:08 compute-0 nova_compute[254061]: 2026-01-20 19:10:08.845 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:08 compute-0 nova_compute[254061]: 2026-01-20 19:10:08.845 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:10:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:08.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:08 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Scheduled restart job, restart counter is at 17.
Jan 20 19:10:08 compute-0 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:10:08 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Consumed 1.724s CPU time.
Jan 20 19:10:08 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Start request repeated too quickly.
Jan 20 19:10:08 compute-0 systemd[1]: ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1@nfs.cephfs.2.0.compute-0.ulclbx.service: Failed with result 'exit-code'.
Jan 20 19:10:09 compute-0 systemd[1]: Failed to start Ceph nfs.cephfs.2.0.compute-0.ulclbx for aecbbf3b-b405-507b-97d7-637a83f5b4b1.
Jan 20 19:10:09 compute-0 nova_compute[254061]: 2026-01-20 19:10:09.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:09 compute-0 nova_compute[254061]: 2026-01-20 19:10:09.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:09 compute-0 ceph-mon[74381]: pgmap v840: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Jan 20 19:10:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2278769063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:09.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:09] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Jan 20 19:10:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:09] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Jan 20 19:10:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:09.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 20 19:10:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:10:10 compute-0 nova_compute[254061]: 2026-01-20 19:10:10.908 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:11 compute-0 nova_compute[254061]: 2026-01-20 19:10:11.276 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:11 compute-0 ceph-mon[74381]: pgmap v841: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 54 op/s
Jan 20 19:10:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:11.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:11.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v842: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:12 compute-0 nova_compute[254061]: 2026-01-20 19:10:12.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:10:13 compute-0 nova_compute[254061]: 2026-01-20 19:10:13.281 254065 INFO nova.compute.manager [None req-ba1147d2-1a34-474f-a8a0-ace15eeb722b d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Get console output
Jan 20 19:10:13 compute-0 nova_compute[254061]: 2026-01-20 19:10:13.287 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:10:13 compute-0 ceph-mon[74381]: pgmap v842: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:13.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:13.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v843: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:15 compute-0 nova_compute[254061]: 2026-01-20 19:10:15.113 254065 DEBUG nova.compute.manager [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-changed-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:10:15 compute-0 nova_compute[254061]: 2026-01-20 19:10:15.113 254065 DEBUG nova.compute.manager [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Refreshing instance network info cache due to event network-changed-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:10:15 compute-0 nova_compute[254061]: 2026-01-20 19:10:15.114 254065 DEBUG oslo_concurrency.lockutils [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:10:15 compute-0 nova_compute[254061]: 2026-01-20 19:10:15.114 254065 DEBUG oslo_concurrency.lockutils [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:10:15 compute-0 nova_compute[254061]: 2026-01-20 19:10:15.114 254065 DEBUG nova.network.neutron [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Refreshing network info cache for port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:10:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:15.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:15 compute-0 ceph-mon[74381]: pgmap v843: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:15.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:15 compute-0 nova_compute[254061]: 2026-01-20 19:10:15.913 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v844: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 19:10:16 compute-0 nova_compute[254061]: 2026-01-20 19:10:16.278 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:16 compute-0 nova_compute[254061]: 2026-01-20 19:10:16.371 254065 DEBUG nova.network.neutron [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updated VIF entry in instance network info cache for port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:10:16 compute-0 nova_compute[254061]: 2026-01-20 19:10:16.371 254065 DEBUG nova.network.neutron [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updating instance_info_cache with network_info: [{"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:10:16 compute-0 nova_compute[254061]: 2026-01-20 19:10:16.387 254065 DEBUG oslo_concurrency.lockutils [req-235925d0-d397-4c57-a401-7be6951a0728 req-0d0607c2-e013-43c9-b2c1-1869fdc768bd 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-390552fe-c600-4ce3-a209-851b5c0a067d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:10:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:16 compute-0 ceph-mon[74381]: pgmap v844: 337 pgs: 337 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 19:10:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:17.173Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:17.174Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:10:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:17.174Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:17 compute-0 sudo[264262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:10:17 compute-0 sudo[264262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:17 compute-0 sudo[264262]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:17.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:17.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v845: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:18.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:19 compute-0 ceph-mon[74381]: pgmap v845: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:19 compute-0 podman[264289]: 2026-01-20 19:10:19.299275872 +0000 UTC m=+0.096404086 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 19:10:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:19.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:19] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:19] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:19.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 9 op/s
Jan 20 19:10:20 compute-0 nova_compute[254061]: 2026-01-20 19:10:20.964 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:21 compute-0 ceph-mon[74381]: pgmap v846: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 9 op/s
Jan 20 19:10:21 compute-0 nova_compute[254061]: 2026-01-20 19:10:21.281 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:21.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:21.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 18 KiB/s wr, 9 op/s
Jan 20 19:10:23 compute-0 ceph-mon[74381]: pgmap v847: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 18 KiB/s wr, 9 op/s
Jan 20 19:10:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2003534155' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:23.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:23.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 19:10:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:25 compute-0 ceph-mon[74381]: pgmap v848: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 19:10:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:10:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:25.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:25.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:26 compute-0 nova_compute[254061]: 2026-01-20 19:10:26.014 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 19:10:26 compute-0 nova_compute[254061]: 2026-01-20 19:10:26.283 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:27.175Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:27 compute-0 ceph-mon[74381]: pgmap v849: 337 pgs: 337 active+clean; 121 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 6.0 KiB/s rd, 16 KiB/s wr, 1 op/s
Jan 20 19:10:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:10:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:27.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:10:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:27.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v850: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 19:10:28 compute-0 podman[264316]: 2026-01-20 19:10:28.195631627 +0000 UTC m=+0.150212621 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 20 19:10:28 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2383281908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:10:28 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1523233943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:10:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:28.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:29 compute-0 ceph-mon[74381]: pgmap v850: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 20 19:10:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:29.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:29] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:29] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:29.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:10:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:30.286 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:30.287 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:30.287 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:31 compute-0 nova_compute[254061]: 2026-01-20 19:10:31.017 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:31 compute-0 nova_compute[254061]: 2026-01-20 19:10:31.285 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:31 compute-0 ceph-mon[74381]: pgmap v851: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:10:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:10:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:31.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:10:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 20 19:10:32 compute-0 ceph-mon[74381]: pgmap v852: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Jan 20 19:10:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:10:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2672 syncs, 4.02 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1723 writes, 5203 keys, 1723 commit groups, 1.0 writes per commit group, ingest: 5.27 MB, 0.01 MB/s
                                           Interval WAL: 1723 writes, 743 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 19:10:33 compute-0 sudo[264348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:10:33 compute-0 sudo[264348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:33 compute-0 sudo[264348]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:33 compute-0 sudo[264373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 20 19:10:33 compute-0 sudo[264373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:33.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:33 compute-0 sudo[264373]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:10:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:10:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:10:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:10:33 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:33 compute-0 sudo[264417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:10:33 compute-0 sudo[264417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:33 compute-0 sudo[264417]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:33 compute-0 sudo[264442]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:10:33 compute-0 sudo[264442]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:33.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v853: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:10:34 compute-0 sudo[264442]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v854: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 116 op/s
Jan 20 19:10:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:10:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:10:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 sudo[264500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:10:34 compute-0 sudo[264500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:34 compute-0 sudo[264500]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: pgmap v853: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:10:34 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:10:34 compute-0 sudo[264525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:10:34 compute-0 sudo[264525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.233912706 +0000 UTC m=+0.058779138 container create 53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hermann, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:35 compute-0 systemd[1]: Started libpod-conmon-53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351.scope.
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.213195708 +0000 UTC m=+0.038062160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:10:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.327658651 +0000 UTC m=+0.152525103 container init 53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.337977975 +0000 UTC m=+0.162844407 container start 53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hermann, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.341903069 +0000 UTC m=+0.166769531 container attach 53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:35 compute-0 pedantic_hermann[264606]: 167 167
Jan 20 19:10:35 compute-0 systemd[1]: libpod-53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351.scope: Deactivated successfully.
Jan 20 19:10:35 compute-0 conmon[264606]: conmon 53e859901ea6ab4b29d7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351.scope/container/memory.events
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.348479002 +0000 UTC m=+0.173345424 container died 53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hermann, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6f333cfec04666b5036e767c4ff3e42ed5908919c0364d09b7d84b835548fb4-merged.mount: Deactivated successfully.
Jan 20 19:10:35 compute-0 podman[264590]: 2026-01-20 19:10:35.384459446 +0000 UTC m=+0.209325878 container remove 53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:10:35 compute-0 systemd[1]: libpod-conmon-53e859901ea6ab4b29d766b84c659d2a16d0a6fe0f348a7c9761d9880c1ca351.scope: Deactivated successfully.
Jan 20 19:10:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:35 compute-0 podman[264632]: 2026-01-20 19:10:35.561038655 +0000 UTC m=+0.047777767 container create 6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curie, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:10:35 compute-0 systemd[1]: Started libpod-conmon-6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5.scope.
Jan 20 19:10:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97f88460918883bd5dd7ab5bf9aa499e9ac4df461822d52fd9bed5d8d967a78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97f88460918883bd5dd7ab5bf9aa499e9ac4df461822d52fd9bed5d8d967a78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97f88460918883bd5dd7ab5bf9aa499e9ac4df461822d52fd9bed5d8d967a78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97f88460918883bd5dd7ab5bf9aa499e9ac4df461822d52fd9bed5d8d967a78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d97f88460918883bd5dd7ab5bf9aa499e9ac4df461822d52fd9bed5d8d967a78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:35 compute-0 podman[264632]: 2026-01-20 19:10:35.539374001 +0000 UTC m=+0.026113133 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:10:35 compute-0 podman[264632]: 2026-01-20 19:10:35.652988152 +0000 UTC m=+0.139727274 container init 6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curie, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:10:35 compute-0 podman[264632]: 2026-01-20 19:10:35.664865016 +0000 UTC m=+0.151604128 container start 6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Jan 20 19:10:35 compute-0 podman[264632]: 2026-01-20 19:10:35.668239896 +0000 UTC m=+0.154979008 container attach 6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:10:35 compute-0 ceph-mon[74381]: pgmap v854: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 116 op/s
Jan 20 19:10:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:35.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:36 compute-0 vigilant_curie[264648]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:10:36 compute-0 vigilant_curie[264648]: --> All data devices are unavailable
Jan 20 19:10:36 compute-0 nova_compute[254061]: 2026-01-20 19:10:36.020 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:36 compute-0 systemd[1]: libpod-6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5.scope: Deactivated successfully.
Jan 20 19:10:36 compute-0 podman[264632]: 2026-01-20 19:10:36.041205338 +0000 UTC m=+0.527944450 container died 6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curie, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d97f88460918883bd5dd7ab5bf9aa499e9ac4df461822d52fd9bed5d8d967a78-merged.mount: Deactivated successfully.
Jan 20 19:10:36 compute-0 podman[264632]: 2026-01-20 19:10:36.089023065 +0000 UTC m=+0.575762177 container remove 6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:10:36 compute-0 systemd[1]: libpod-conmon-6846b0cc104b8274ff8c8d3ab865e917e7cc9bb7b143e79bd0ecc963cfb542d5.scope: Deactivated successfully.
Jan 20 19:10:36 compute-0 sudo[264525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:36 compute-0 sudo[264676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:10:36 compute-0 sudo[264676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:36 compute-0 sudo[264676]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:36 compute-0 sudo[264701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:10:36 compute-0 sudo[264701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:36 compute-0 nova_compute[254061]: 2026-01-20 19:10:36.308 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v855: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 116 op/s
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.673939923 +0000 UTC m=+0.043063072 container create b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_rhodes, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:10:36 compute-0 systemd[1]: Started libpod-conmon-b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8.scope.
Jan 20 19:10:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.656232954 +0000 UTC m=+0.025356113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.75495461 +0000 UTC m=+0.124077759 container init b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_rhodes, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.760746414 +0000 UTC m=+0.129869563 container start b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.765259543 +0000 UTC m=+0.134382722 container attach b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 19:10:36 compute-0 intelligent_rhodes[264785]: 167 167
Jan 20 19:10:36 compute-0 systemd[1]: libpod-b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8.scope: Deactivated successfully.
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.767419861 +0000 UTC m=+0.136543010 container died b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-561bc58d55802dd071f85d03b65c321efa1692a39942983b39d31699575795ac-merged.mount: Deactivated successfully.
Jan 20 19:10:36 compute-0 podman[264769]: 2026-01-20 19:10:36.805633643 +0000 UTC m=+0.174756792 container remove b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:10:36 compute-0 systemd[1]: libpod-conmon-b3145bc81fe2b544255a1fdc058dfecc0581ce1c1ace70c6928e295e7cc90af8.scope: Deactivated successfully.
Jan 20 19:10:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:37.00591761 +0000 UTC m=+0.048495097 container create a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:10:37 compute-0 systemd[1]: Started libpod-conmon-a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d.scope.
Jan 20 19:10:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:36.983794643 +0000 UTC m=+0.026372220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d477c93f9545972331350235e215257bab50a802b25b326033b32d940b976e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d477c93f9545972331350235e215257bab50a802b25b326033b32d940b976e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d477c93f9545972331350235e215257bab50a802b25b326033b32d940b976e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92d477c93f9545972331350235e215257bab50a802b25b326033b32d940b976e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:37.09464189 +0000 UTC m=+0.137219387 container init a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:37.101915004 +0000 UTC m=+0.144492491 container start a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:37.106134106 +0000 UTC m=+0.148711603 container attach a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:10:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:37.175Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:37.176Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:37 compute-0 sudo[264831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:10:37 compute-0 sudo[264831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:37 compute-0 sudo[264831]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]: {
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:     "0": [
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:         {
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "devices": [
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "/dev/loop3"
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             ],
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "lv_name": "ceph_lv0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "lv_size": "21470642176",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "name": "ceph_lv0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "tags": {
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.cluster_name": "ceph",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.crush_device_class": "",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.encrypted": "0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.osd_id": "0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.type": "block",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.vdo": "0",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:                 "ceph.with_tpm": "0"
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             },
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "type": "block",
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:             "vg_name": "ceph_vg0"
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:         }
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]:     ]
Jan 20 19:10:37 compute-0 reverent_dijkstra[264826]: }
Jan 20 19:10:37 compute-0 systemd[1]: libpod-a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d.scope: Deactivated successfully.
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:37.382232051 +0000 UTC m=+0.424809538 container died a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:10:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-92d477c93f9545972331350235e215257bab50a802b25b326033b32d940b976e-merged.mount: Deactivated successfully.
Jan 20 19:10:37 compute-0 podman[264809]: 2026-01-20 19:10:37.423927886 +0000 UTC m=+0.466505373 container remove a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_dijkstra, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:10:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:37.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:37 compute-0 systemd[1]: libpod-conmon-a90ad79c5a299d2ce8f1805c0f54970438c1b0023906cd55042b0b6fe427585d.scope: Deactivated successfully.
Jan 20 19:10:37 compute-0 sudo[264701]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:37 compute-0 sudo[264872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:10:37 compute-0 sudo[264872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:37 compute-0 sudo[264872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:37 compute-0 sudo[264897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:10:37 compute-0 sudo[264897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:37 compute-0 ceph-mon[74381]: pgmap v855: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 116 op/s
Jan 20 19:10:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:37.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:38.006129062 +0000 UTC m=+0.042425045 container create dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leakey, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:10:38 compute-0 systemd[1]: Started libpod-conmon-dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466.scope.
Jan 20 19:10:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:38.085388252 +0000 UTC m=+0.121684225 container init dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leakey, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:37.990223791 +0000 UTC m=+0.026519774 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:38.099080865 +0000 UTC m=+0.135376808 container start dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:38.10268586 +0000 UTC m=+0.138981843 container attach dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:38 compute-0 systemd[1]: libpod-dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466.scope: Deactivated successfully.
Jan 20 19:10:38 compute-0 fervent_leakey[264981]: 167 167
Jan 20 19:10:38 compute-0 conmon[264981]: conmon dacc645fa0a3173db55c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466.scope/container/memory.events
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:38.105912966 +0000 UTC m=+0.142208909 container died dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:10:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f3232b76428c3b642fd508e27ce5f454840dc3c12cb455d9e7082f65d276be2-merged.mount: Deactivated successfully.
Jan 20 19:10:38 compute-0 podman[264964]: 2026-01-20 19:10:38.145022353 +0000 UTC m=+0.181318306 container remove dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=fervent_leakey, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:10:38 compute-0 systemd[1]: libpod-conmon-dacc645fa0a3173db55c0330051c65cbb95e26c348605585ad6aa19489c93466.scope: Deactivated successfully.
Jan 20 19:10:38 compute-0 podman[265005]: 2026-01-20 19:10:38.330918928 +0000 UTC m=+0.052168323 container create 9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:10:38 compute-0 systemd[1]: Started libpod-conmon-9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736.scope.
Jan 20 19:10:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:38 compute-0 podman[265005]: 2026-01-20 19:10:38.308626617 +0000 UTC m=+0.029876112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7becb97992e4d77203810c39d09ec9fc8d466b190e0635b500804fbc9b7472f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7becb97992e4d77203810c39d09ec9fc8d466b190e0635b500804fbc9b7472f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7becb97992e4d77203810c39d09ec9fc8d466b190e0635b500804fbc9b7472f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7becb97992e4d77203810c39d09ec9fc8d466b190e0635b500804fbc9b7472f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:38 compute-0 podman[265005]: 2026-01-20 19:10:38.420227754 +0000 UTC m=+0.141477199 container init 9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banzai, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:10:38 compute-0 podman[265005]: 2026-01-20 19:10:38.429023777 +0000 UTC m=+0.150273182 container start 9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:10:38 compute-0 podman[265005]: 2026-01-20 19:10:38.4332969 +0000 UTC m=+0.154546305 container attach 9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:10:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v856: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 20 19:10:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:38.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:39 compute-0 lvm[265097]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:10:39 compute-0 lvm[265097]: VG ceph_vg0 finished
Jan 20 19:10:39 compute-0 recursing_banzai[265022]: {}
Jan 20 19:10:39 compute-0 systemd[1]: libpod-9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736.scope: Deactivated successfully.
Jan 20 19:10:39 compute-0 podman[265005]: 2026-01-20 19:10:39.255946768 +0000 UTC m=+0.977196183 container died 9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:10:39 compute-0 systemd[1]: libpod-9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736.scope: Consumed 1.297s CPU time.
Jan 20 19:10:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7becb97992e4d77203810c39d09ec9fc8d466b190e0635b500804fbc9b7472f6-merged.mount: Deactivated successfully.
Jan 20 19:10:39 compute-0 podman[265005]: 2026-01-20 19:10:39.308958223 +0000 UTC m=+1.030207628 container remove 9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_banzai, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:10:39 compute-0 systemd[1]: libpod-conmon-9342a91a21dbdc6fa799701079e4ac3bb7e0676aba66b047ae11c8e03594b736.scope: Deactivated successfully.
Jan 20 19:10:39 compute-0 sudo[264897]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:10:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:10:39 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:39 compute-0 sudo[265111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:10:39 compute-0 sudo[265111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:39.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:39 compute-0 sudo[265111]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:39] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:39] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:39.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:40 compute-0 ceph-mon[74381]: pgmap v856: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 20 19:10:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:10:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:10:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v857: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 20 19:10:41 compute-0 nova_compute[254061]: 2026-01-20 19:10:41.024 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:41 compute-0 nova_compute[254061]: 2026-01-20 19:10:41.310 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:41.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:41.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:42 compute-0 ceph-mon[74381]: pgmap v857: 337 pgs: 337 active+clean; 167 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 14 KiB/s wr, 85 op/s
Jan 20 19:10:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v858: 337 pgs: 337 active+clean; 188 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 867 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Jan 20 19:10:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:43.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:43.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:44 compute-0 ceph-mon[74381]: pgmap v858: 337 pgs: 337 active+clean; 188 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 867 KiB/s rd, 2.3 MiB/s wr, 68 op/s
Jan 20 19:10:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v859: 337 pgs: 337 active+clean; 188 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 2.3 MiB/s wr, 36 op/s
Jan 20 19:10:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:45.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:45.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:46 compute-0 nova_compute[254061]: 2026-01-20 19:10:46.028 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:46 compute-0 nova_compute[254061]: 2026-01-20 19:10:46.312 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v860: 337 pgs: 337 active+clean; 193 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 20 19:10:46 compute-0 ceph-mon[74381]: pgmap v859: 337 pgs: 337 active+clean; 188 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 2.3 MiB/s wr, 36 op/s
Jan 20 19:10:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:47.176Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:47.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:47.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:47 compute-0 ceph-mon[74381]: pgmap v860: 337 pgs: 337 active+clean; 193 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 235 KiB/s rd, 2.1 MiB/s wr, 58 op/s
Jan 20 19:10:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:47.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v861: 337 pgs: 337 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2550460248' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:10:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:48.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:48.867Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:10:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:49.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:49 compute-0 ceph-mon[74381]: pgmap v861: 337 pgs: 337 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2550460248' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:10:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:49] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Jan 20 19:10:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:49] "GET /metrics HTTP/1.1" 200 48472 "" "Prometheus/2.51.0"
Jan 20 19:10:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:49.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:50 compute-0 podman[265146]: 2026-01-20 19:10:50.096165118 +0000 UTC m=+0.065603080 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:10:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:50.241 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:10:50 compute-0 nova_compute[254061]: 2026-01-20 19:10:50.242 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:50.242 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:10:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:50.243 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:10:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v862: 337 pgs: 337 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:51 compute-0 nova_compute[254061]: 2026-01-20 19:10:51.031 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:51 compute-0 nova_compute[254061]: 2026-01-20 19:10:51.358 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:51.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:51 compute-0 ceph-mon[74381]: pgmap v862: 337 pgs: 337 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:10:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:51.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v863: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 20 19:10:52 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1992118311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:53.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:53 compute-0 ceph-mon[74381]: pgmap v863: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 20 19:10:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:53.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.089 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.090 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.090 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.091 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.091 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.093 254065 INFO nova.compute.manager [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Terminating instance
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.095 254065 DEBUG nova.compute.manager [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 19:10:54 compute-0 kernel: tapb5955f0c-06 (unregistering): left promiscuous mode
Jan 20 19:10:54 compute-0 NetworkManager[48914]: <info>  [1768936254.1468] device (tapb5955f0c-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:10:54 compute-0 ovn_controller[155128]: 2026-01-20T19:10:54Z|00046|binding|INFO|Releasing lport b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b from this chassis (sb_readonly=0)
Jan 20 19:10:54 compute-0 ovn_controller[155128]: 2026-01-20T19:10:54Z|00047|binding|INFO|Setting lport b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b down in Southbound
Jan 20 19:10:54 compute-0 ovn_controller[155128]: 2026-01-20T19:10:54Z|00048|binding|INFO|Removing iface tapb5955f0c-06 ovn-installed in OVS
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.166 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.173 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:7a:43 10.100.0.3'], port_security=['fa:16:3e:2d:7a:43 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '390552fe-c600-4ce3-a209-851b5c0a067d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-873c0e56-2798-477a-adc3-8a628bffd4e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ac89db4-88de-404e-8497-9c576b033842', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2c6ad307-385c-4724-a394-d49c5c3a804b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.175 165659 INFO neutron.agent.ovn.metadata.agent [-] Port b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b in datapath 873c0e56-2798-477a-adc3-8a628bffd4e1 unbound from our chassis
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.177 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 873c0e56-2798-477a-adc3-8a628bffd4e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.178 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[fd18db3a-96b2-4d5c-b06f-59da2e064808]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.179 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1 namespace which is not needed anymore
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.194 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 20 19:10:54 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000004.scope: Consumed 16.065s CPU time.
Jan 20 19:10:54 compute-0 systemd-machined[220746]: Machine qemu-2-instance-00000004 terminated.
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.318 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.322 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [NOTICE]   (264078) : haproxy version is 2.8.14-c23fe91
Jan 20 19:10:54 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [NOTICE]   (264078) : path to executable is /usr/sbin/haproxy
Jan 20 19:10:54 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [WARNING]  (264078) : Exiting Master process...
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.342 254065 INFO nova.virt.libvirt.driver [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Instance destroyed successfully.
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.342 254065 DEBUG nova.objects.instance [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'resources' on Instance uuid 390552fe-c600-4ce3-a209-851b5c0a067d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:10:54 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [ALERT]    (264078) : Current worker (264080) exited with code 143 (Terminated)
Jan 20 19:10:54 compute-0 neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1[264074]: [WARNING]  (264078) : All workers exited. Exiting... (0)
Jan 20 19:10:54 compute-0 systemd[1]: libpod-80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc.scope: Deactivated successfully.
Jan 20 19:10:54 compute-0 podman[265196]: 2026-01-20 19:10:54.353885163 +0000 UTC m=+0.061841719 container died 80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.359 254065 DEBUG nova.virt.libvirt.vif [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:09:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-794136600',display_name='tempest-TestNetworkBasicOps-server-794136600',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-794136600',id=4,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG3+rAyyjV/EzO01vUAJnl7IV+iKRclPYM6MZ5A/U1F9mbIlYEIUOrWmm0VSDtBi6EyX6b1roJWGutyV+ZX7+SU3lPvUOqicmJKar+2nRoxjyKH+QoCQaxwdC7KzJvVauA==',key_name='tempest-TestNetworkBasicOps-128194792',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:09:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-iydiyz0d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:09:53Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=390552fe-c600-4ce3-a209-851b5c0a067d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.359 254065 DEBUG nova.network.os_vif_util [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "address": "fa:16:3e:2d:7a:43", "network": {"id": "873c0e56-2798-477a-adc3-8a628bffd4e1", "bridge": "br-int", "label": "tempest-network-smoke--647446815", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb5955f0c-06", "ovs_interfaceid": "b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.360 254065 DEBUG nova.network.os_vif_util [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.361 254065 DEBUG os_vif [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.363 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.363 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5955f0c-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.365 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.367 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.368 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.370 254065 INFO os_vif [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:7a:43,bridge_name='br-int',has_traffic_filtering=True,id=b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b,network=Network(873c0e56-2798-477a-adc3-8a628bffd4e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb5955f0c-06')
Jan 20 19:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc-userdata-shm.mount: Deactivated successfully.
Jan 20 19:10:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-5184f2c797fb5af1c8ebe580319c5f230e4b5280c61a25bacafb9019b80be4ff-merged.mount: Deactivated successfully.
Jan 20 19:10:54 compute-0 podman[265196]: 2026-01-20 19:10:54.402148573 +0000 UTC m=+0.110105109 container cleanup 80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 19:10:54 compute-0 systemd[1]: libpod-conmon-80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc.scope: Deactivated successfully.
Jan 20 19:10:54 compute-0 podman[265252]: 2026-01-20 19:10:54.470826452 +0000 UTC m=+0.048308021 container remove 80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.477 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[97474ecd-189a-49c1-a466-66d035a8f829]: (4, ('Tue Jan 20 07:10:54 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1 (80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc)\n80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc\nTue Jan 20 07:10:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1 (80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc)\n80f0deb819cdaae645754b331be1592e47a46c8c13f4232eb77693032833cccc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.479 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[82e803ea-f0d5-4458-ae0c-2d24e2dbf2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.481 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap873c0e56-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.483 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 kernel: tap873c0e56-20: left promiscuous mode
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.488 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.491 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[d2601620-d1b9-4fcc-a5ba-16a9bbaf2fe2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.505 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.505 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[52bc8b7a-6f35-48c0-88c5-b32f98ef3da1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.508 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[44aee3fc-71de-4ceb-b1e2-984f7dbf2d7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.527 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[c5a55366-d02b-41fe-8c03-1fe9910c352c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438782, 'reachable_time': 18595, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265270, 'error': None, 'target': 'ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.532 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-873c0e56-2798-477a-adc3-8a628bffd4e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 19:10:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d873c0e56\x2d2798\x2d477a\x2dadc3\x2d8a628bffd4e1.mount: Deactivated successfully.
Jan 20 19:10:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:10:54.532 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4b5bf7-fac2-4b76-8e62-62707959b559]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:10:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v864: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 115 KiB/s wr, 59 op/s
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.764 254065 INFO nova.virt.libvirt.driver [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Deleting instance files /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d_del
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.765 254065 INFO nova.virt.libvirt.driver [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Deletion of /var/lib/nova/instances/390552fe-c600-4ce3-a209-851b5c0a067d_del complete
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.831 254065 INFO nova.compute.manager [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.831 254065 DEBUG oslo.service.loopingcall [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.832 254065 DEBUG nova.compute.manager [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.832 254065 DEBUG nova.network.neutron [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.870 254065 DEBUG nova.compute.manager [req-b35c6680-fa7b-45f9-9374-327de7c0b4f2 req-d7ad995a-8f64-43ef-9731-fcb9a70269af 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-vif-unplugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.871 254065 DEBUG oslo_concurrency.lockutils [req-b35c6680-fa7b-45f9-9374-327de7c0b4f2 req-d7ad995a-8f64-43ef-9731-fcb9a70269af 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.871 254065 DEBUG oslo_concurrency.lockutils [req-b35c6680-fa7b-45f9-9374-327de7c0b4f2 req-d7ad995a-8f64-43ef-9731-fcb9a70269af 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.872 254065 DEBUG oslo_concurrency.lockutils [req-b35c6680-fa7b-45f9-9374-327de7c0b4f2 req-d7ad995a-8f64-43ef-9731-fcb9a70269af 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.872 254065 DEBUG nova.compute.manager [req-b35c6680-fa7b-45f9-9374-327de7c0b4f2 req-d7ad995a-8f64-43ef-9731-fcb9a70269af 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] No waiting events found dispatching network-vif-unplugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:10:54 compute-0 nova_compute[254061]: 2026-01-20 19:10:54.873 254065 DEBUG nova.compute.manager [req-b35c6680-fa7b-45f9-9374-327de7c0b4f2 req-d7ad995a-8f64-43ef-9731-fcb9a70269af 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-vif-unplugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:10:54
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.nfs', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:10:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:10:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:55.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:55 compute-0 nova_compute[254061]: 2026-01-20 19:10:55.699 254065 DEBUG nova.network.neutron [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:10:55 compute-0 nova_compute[254061]: 2026-01-20 19:10:55.716 254065 INFO nova.compute.manager [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Took 0.88 seconds to deallocate network for instance.
Jan 20 19:10:55 compute-0 nova_compute[254061]: 2026-01-20 19:10:55.762 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:55 compute-0 nova_compute[254061]: 2026-01-20 19:10:55.763 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:55 compute-0 nova_compute[254061]: 2026-01-20 19:10:55.818 254065 DEBUG oslo_concurrency.processutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:10:55 compute-0 ceph-mon[74381]: pgmap v864: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 192 KiB/s rd, 115 KiB/s wr, 59 op/s
Jan 20 19:10:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:10:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:55.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:10:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3811186971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.276 254065 DEBUG oslo_concurrency.processutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.285 254065 DEBUG nova.compute.provider_tree [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.301 254065 DEBUG nova.scheduler.client.report [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.339 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.575s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.360 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.366 254065 INFO nova.scheduler.client.report [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Deleted allocations for instance 390552fe-c600-4ce3-a209-851b5c0a067d
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.430 254065 DEBUG oslo_concurrency.lockutils [None req-6144bf93-2fa4-4822-a995-cda11964115d d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v865: 337 pgs: 337 active+clean; 74 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 116 KiB/s wr, 84 op/s
Jan 20 19:10:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:10:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3811186971' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.956 254065 DEBUG nova.compute.manager [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.957 254065 DEBUG oslo_concurrency.lockutils [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.958 254065 DEBUG oslo_concurrency.lockutils [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.958 254065 DEBUG oslo_concurrency.lockutils [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "390552fe-c600-4ce3-a209-851b5c0a067d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.958 254065 DEBUG nova.compute.manager [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] No waiting events found dispatching network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.959 254065 WARNING nova.compute.manager [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received unexpected event network-vif-plugged-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b for instance with vm_state deleted and task_state None.
Jan 20 19:10:56 compute-0 nova_compute[254061]: 2026-01-20 19:10:56.959 254065 DEBUG nova.compute.manager [req-b9360d45-88ff-4a3f-b60b-8af234894da6 req-985cd795-f20b-48f6-aca1-f5404e58fa48 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Received event network-vif-deleted-b5955f0c-06f4-4e74-ada0-a76a7d5a4d8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:10:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:57.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:57 compute-0 sudo[265296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:10:57 compute-0 sudo[265296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:57 compute-0 sudo[265296]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:57.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:57 compute-0 ceph-mon[74381]: pgmap v865: 337 pgs: 337 active+clean; 74 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 116 KiB/s wr, 84 op/s
Jan 20 19:10:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:57.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:10:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v866: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 39 KiB/s wr, 61 op/s
Jan 20 19:10:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:10:58.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:10:59 compute-0 podman[265323]: 2026-01-20 19:10:59.154255937 +0000 UTC m=+0.119046135 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 19:10:59 compute-0 nova_compute[254061]: 2026-01-20 19:10:59.366 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:10:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:10:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:10:59.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:10:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:59] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:10:59] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:10:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:10:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:10:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:10:59.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:00 compute-0 nova_compute[254061]: 2026-01-20 19:11:00.371 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:00 compute-0 ceph-mon[74381]: pgmap v866: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 39 KiB/s wr, 61 op/s
Jan 20 19:11:00 compute-0 nova_compute[254061]: 2026-01-20 19:11:00.489 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 22 KiB/s wr, 57 op/s
Jan 20 19:11:01 compute-0 nova_compute[254061]: 2026-01-20 19:11:01.363 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:01.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:01 compute-0 ceph-mon[74381]: pgmap v867: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 22 KiB/s wr, 57 op/s
Jan 20 19:11:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:01.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.153 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
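The acquiring/acquired/released trio around clean_compute_node_cache is the trace emitted when a function runs under an oslo.concurrency named lock; the waited/held durations are logged on entry and exit. A sketch of the pattern (not nova's actual code):

    from oslo_concurrency import lockutils

    # Serialize all work that touches the shared resource-tracker state
    # behind the in-process "compute_resources" lock, as the log shows.
    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        ...  # body elided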
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.154 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.155 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 22 KiB/s wr, 58 op/s
Jan 20 19:11:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:11:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/500112500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.643 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/500112500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
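The resource audit shells out to ceph df, and the mon audit log shows the same command arriving as a mon_command dispatch from client.openstack; the round trip costs ~0.5 s here (0.489s). The equivalent probe, as a sketch with a hypothetical ceph_df helper:

    import json, subprocess

    def ceph_df(conf='/etc/ceph/ceph.conf', client='openstack'):
        out = subprocess.check_output(
            ['ceph', 'df', '--format=json', '--id', client, '--conf', conf])
        return json.loads(out)   # cluster and per-pool capacity stats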
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.819 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.821 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4594MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.821 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.821 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.909 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.911 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:11:02 compute-0 nova_compute[254061]: 2026-01-20 19:11:02.927 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:11:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320103259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:03 compute-0 nova_compute[254061]: 2026-01-20 19:11:03.400 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:03 compute-0 nova_compute[254061]: 2026-01-20 19:11:03.407 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:11:03 compute-0 nova_compute[254061]: 2026-01-20 19:11:03.434 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
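Placement treats each inventory as capacity = (total - reserved) * allocation_ratio, so the data above advertises 7167 MB of schedulable RAM, 32 VCPUs and 52.2 GB of disk. Worked through:

    inv = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2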
Jan 20 19:11:03 compute-0 nova_compute[254061]: 2026-01-20 19:11:03.466 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:11:03 compute-0 nova_compute[254061]: 2026-01-20 19:11:03.467 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:03.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:03 compute-0 ceph-mon[74381]: pgmap v868: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 22 KiB/s wr, 58 op/s
Jan 20 19:11:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2320103259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:03.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:04 compute-0 nova_compute[254061]: 2026-01-20 19:11:04.368 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:04 compute-0 nova_compute[254061]: 2026-01-20 19:11:04.468 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.6 KiB/s wr, 29 op/s
Jan 20 19:11:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:05.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:05 compute-0 ceph-mon[74381]: pgmap v869: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.6 KiB/s wr, 29 op/s
Jan 20 19:11:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:05.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:06 compute-0 nova_compute[254061]: 2026-01-20 19:11:06.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:06 compute-0 nova_compute[254061]: 2026-01-20 19:11:06.365 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.6 KiB/s wr, 29 op/s
Jan 20 19:11:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:07 compute-0 nova_compute[254061]: 2026-01-20 19:11:07.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:07 compute-0 nova_compute[254061]: 2026-01-20 19:11:07.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:07 compute-0 nova_compute[254061]: 2026-01-20 19:11:07.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:11:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:07.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:11:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:07.179Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:11:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:07.179Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
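Both ceph-dashboard webhooks are unreachable (dial timeouts to compute-1 and compute-2 on 8443), so the dispatcher drops the notification after two attempts. For illustration only, a toy receiver that would satisfy these POSTs; it assumes plain HTTP, unlike whatever TLS the real dashboard endpoint terminates:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            # Alertmanager POSTs a JSON body carrying an "alerts" list.
            body = self.rfile.read(int(self.headers.get('Content-Length', 0)))
            alerts = json.loads(body or b'{}').get('alerts', [])
            print(f'received {len(alerts)} alert(s) on {self.path}')
            self.send_response(200)
            self.end_headers()

    # HTTPServer(('', 8443), Receiver).serve_forever()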
Jan 20 19:11:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:07.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:07 compute-0 ceph-mon[74381]: pgmap v870: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 9.6 KiB/s wr, 29 op/s
Jan 20 19:11:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:07.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:08 compute-0 nova_compute[254061]: 2026-01-20 19:11:08.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:08 compute-0 nova_compute[254061]: 2026-01-20 19:11:08.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:11:08 compute-0 nova_compute[254061]: 2026-01-20 19:11:08.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:11:08 compute-0 nova_compute[254061]: 2026-01-20 19:11:08.159 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:11:08 compute-0 nova_compute[254061]: 2026-01-20 19:11:08.159 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 8.3 KiB/s wr, 4 op/s
Jan 20 19:11:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3530596939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:08.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:09 compute-0 nova_compute[254061]: 2026-01-20 19:11:09.338 254065 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768936254.336674, 390552fe-c600-4ce3-a209-851b5c0a067d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:11:09 compute-0 nova_compute[254061]: 2026-01-20 19:11:09.338 254065 INFO nova.compute.manager [-] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] VM Stopped (Lifecycle Event)
Jan 20 19:11:09 compute-0 nova_compute[254061]: 2026-01-20 19:11:09.369 254065 DEBUG nova.compute.manager [None req-d3b185b1-6aa1-41f0-b6ec-798783d89a7b - - - - - -] [instance: 390552fe-c600-4ce3-a209-851b5c0a067d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:11:09 compute-0 nova_compute[254061]: 2026-01-20 19:11:09.370 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:09.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:09 compute-0 ceph-mon[74381]: pgmap v871: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 8.3 KiB/s wr, 4 op/s
Jan 20 19:11:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/358894664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/61920256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:09] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:11:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:09] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:11:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:09.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:10 compute-0 nova_compute[254061]: 2026-01-20 19:11:10.155 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1342817738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:11:11 compute-0 nova_compute[254061]: 2026-01-20 19:11:11.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:11:11 compute-0 nova_compute[254061]: 2026-01-20 19:11:11.368 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
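The steady [POLLIN] on fd 24 lines are the OVSDB IDL's poller waking because the connection to ovsdb-server has data (keepalive echoes and table updates); ovsdbapp's vlog simply records each wakeup. The pattern underneath, sketched with the stdlib (wait_readable is illustrative):

    import select

    def wait_readable(fd, timeout_ms=None):
        # Block until fd is readable, as ovs.poller does before the
        # IDL run() consumes the pending update.
        p = select.poll()
        p.register(fd, select.POLLIN)
        return p.poll(timeout_ms)   # e.g. [(24, select.POLLIN)]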
Jan 20 19:11:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:11.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:11.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:12 compute-0 ceph-mon[74381]: pgmap v872: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 20 19:11:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:13.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:14 compute-0 ceph-mon[74381]: pgmap v873: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 20 19:11:14 compute-0 nova_compute[254061]: 2026-01-20 19:11:14.372 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:15.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:15.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:16 compute-0 nova_compute[254061]: 2026-01-20 19:11:16.369 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:16 compute-0 ceph-mon[74381]: pgmap v874: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:17.180Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:11:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:17.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:11:17 compute-0 sudo[265414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:11:17 compute-0 sudo[265414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:17 compute-0 sudo[265414]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:17.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:17.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:11:18 compute-0 nova_compute[254061]: 2026-01-20 19:11:18.604 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:18 compute-0 nova_compute[254061]: 2026-01-20 19:11:18.605 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:18 compute-0 nova_compute[254061]: 2026-01-20 19:11:18.627 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 19:11:18 compute-0 ceph-mon[74381]: pgmap v875: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:18.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.086 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.087 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.134 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.135 254065 INFO nova.compute.claims [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Claim successful on node compute-0.ctlplane.example.com
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.269 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.374 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:19.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:19 compute-0 ceph-mon[74381]: pgmap v876: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:11:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:11:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2296003157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.746 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.752 254065 DEBUG nova.compute.provider_tree [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.769 254065 DEBUG nova.scheduler.client.report [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.796 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.709s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.797 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 19:11:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:11:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.859 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.859 254065 DEBUG nova.network.neutron [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.882 254065 INFO nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 19:11:19 compute-0 nova_compute[254061]: 2026-01-20 19:11:19.959 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 19:11:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:19.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.086 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.089 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.090 254065 INFO nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Creating image(s)
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.131 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.175 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.216 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.221 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.251 254065 DEBUG nova.policy [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.312 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
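qemu-img info runs under oslo's prlimit shim (--as=1073741824, --cpu=30: 1 GiB of address space, 30 s of CPU) so a malformed image cannot wedge the compute host. The same guard expressed directly with the resource module, as a sketch (run_limited is a hypothetical helper):

    import resource, subprocess

    def run_limited(cmd, as_bytes=1 << 30, cpu_s=30):
        def caps():   # applied in the child before exec
            resource.setrlimit(resource.RLIMIT_AS, (as_bytes, as_bytes))
            resource.setrlimit(resource.RLIMIT_CPU, (cpu_s, cpu_s))
        return subprocess.run(cmd, preexec_fn=caps, capture_output=True)

    # run_limited(['qemu-img', 'info', base, '--force-share', '--output=json'])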
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.313 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.314 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.314 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.346 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.351 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.632 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2296003157' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.720 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] resizing rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
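The image flow above is: import the cached base file into the vms pool as <uuid>_disk, then grow it to the flavor's 1 GiB root disk (the log shows nova doing the resize through rbd_utils rather than the CLI; both steps are shown as CLI calls here for illustration):

    import subprocess

    base = '/var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386'
    disk = 'bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk'
    ceph = ['--id', 'openstack', '--conf', '/etc/ceph/ceph.conf']

    subprocess.check_call(['rbd', 'import', '--pool', 'vms', base, disk,
                           '--image-format=2'] + ceph)
    subprocess.check_call(['rbd', 'resize', '--pool', 'vms', disk,
                           '--size', '1024'] + ceph)   # MB; 1024 MB = 1073741824 B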
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.827 254065 DEBUG nova.objects.instance [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'migration_context' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.843 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.844 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Ensure instance console log exists: /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.844 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.845 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:20 compute-0 nova_compute[254061]: 2026-01-20 19:11:20.845 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:21 compute-0 podman[265631]: 2026-01-20 19:11:21.100766606 +0000 UTC m=+0.062345573 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:11:21 compute-0 nova_compute[254061]: 2026-01-20 19:11:21.131 254065 DEBUG nova.network.neutron [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Successfully created port: 8d71eaa1-d4f2-413e-9640-7704328de4fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 19:11:21 compute-0 nova_compute[254061]: 2026-01-20 19:11:21.371 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:21.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:21 compute-0 ceph-mon[74381]: pgmap v877: 337 pgs: 337 active+clean; 41 MiB data, 260 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:11:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:21 compute-0 nova_compute[254061]: 2026-01-20 19:11:21.895 254065 DEBUG nova.network.neutron [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Successfully updated port: 8d71eaa1-d4f2-413e-9640-7704328de4fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:11:21 compute-0 nova_compute[254061]: 2026-01-20 19:11:21.916 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:11:21 compute-0 nova_compute[254061]: 2026-01-20 19:11:21.916 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:11:21 compute-0 nova_compute[254061]: 2026-01-20 19:11:21.916 254065 DEBUG nova.network.neutron [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:11:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:21.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.021 254065 DEBUG nova.compute.manager [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-changed-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.022 254065 DEBUG nova.compute.manager [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing instance network info cache due to event network-changed-8d71eaa1-d4f2-413e-9640-7704328de4fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.022 254065 DEBUG oslo_concurrency.lockutils [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.083 254065 DEBUG nova.network.neutron [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 19:11:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.972 254065 DEBUG nova.network.neutron [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.992 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.993 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Instance network_info: |[{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.993 254065 DEBUG oslo_concurrency.lockutils [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.993 254065 DEBUG nova.network.neutron [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing network info cache for port 8d71eaa1-d4f2-413e-9640-7704328de4fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:11:22 compute-0 nova_compute[254061]: 2026-01-20 19:11:22.995 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Start _get_guest_xml network_info=[{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'bc57af0c-4b71-499e-9808-3c8fc070a488'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.000 254065 WARNING nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.005 254065 DEBUG nova.virt.libvirt.host [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.006 254065 DEBUG nova.virt.libvirt.host [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.014 254065 DEBUG nova.virt.libvirt.host [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.014 254065 DEBUG nova.virt.libvirt.host [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.015 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.015 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T19:05:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7446c314-5a17-42fd-97d9-a7a94e27bff9',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.015 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.016 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.016 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.016 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.016 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.016 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.017 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.017 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.017 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.017 254065 DEBUG nova.virt.hardware [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.019 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:11:23 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3233877545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.488 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.514 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.517 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:23.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:23 compute-0 ceph-mon[74381]: pgmap v878: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:11:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3233877545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:11:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:11:23 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3451685413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.969 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.971 254065 DEBUG nova.virt.libvirt.vif [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:11:20Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.971 254065 DEBUG nova.network.os_vif_util [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.972 254065 DEBUG nova.network.os_vif_util [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:11:23 compute-0 nova_compute[254061]: 2026-01-20 19:11:23.973 254065 DEBUG nova.objects.instance [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:11:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:23.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.004 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] End _get_guest_xml xml=<domain type="kvm">
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <uuid>bfdc2bf6-cb73-4586-861c-e6057f75edcc</uuid>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <name>instance-00000006</name>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <memory>131072</memory>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <vcpu>1</vcpu>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:creationTime>2026-01-20 19:11:23</nova:creationTime>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:flavor name="m1.nano">
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:memory>128</nova:memory>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:disk>1</nova:disk>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:swap>0</nova:swap>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:vcpus>1</nova:vcpus>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </nova:flavor>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:owner>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </nova:owner>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <nova:ports>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:11:24 compute-0 nova_compute[254061]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         </nova:port>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </nova:ports>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </nova:instance>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <sysinfo type="smbios">
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <system>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <entry name="manufacturer">RDO</entry>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <entry name="product">OpenStack Compute</entry>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <entry name="serial">bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <entry name="uuid">bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <entry name="family">Virtual Machine</entry>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </system>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <os>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <boot dev="hd"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <smbios mode="sysinfo"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </os>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <features>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <vmcoreinfo/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </features>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <clock offset="utc">
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <timer name="hpet" present="no"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <cpu mode="host-model" match="exact">
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <disk type="network" device="disk">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk">
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </source>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <target dev="vda" bus="virtio"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <disk type="network" device="cdrom">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config">
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </source>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:11:24 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <target dev="sda" bus="sata"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <interface type="ethernet">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <mac address="fa:16:3e:c8:97:18"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <mtu size="1442"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <target dev="tap8d71eaa1-d4"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <serial type="pty">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <log file="/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log" append="off"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <video>
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </video>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <input type="tablet" bus="usb"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <rng model="virtio">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <backend model="random">/dev/urandom</backend>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <controller type="usb" index="0"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     <memballoon model="virtio">
Jan 20 19:11:24 compute-0 nova_compute[254061]:       <stats period="10"/>
Jan 20 19:11:24 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:11:24 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:11:24 compute-0 nova_compute[254061]: </domain>
Jan 20 19:11:24 compute-0 nova_compute[254061]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.005 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Preparing to wait for external event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.005 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.006 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.006 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.007 254065 DEBUG nova.virt.libvirt.vif [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:11:20Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.007 254065 DEBUG nova.network.os_vif_util [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.007 254065 DEBUG nova.network.os_vif_util [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.008 254065 DEBUG os_vif [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.008 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.009 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.009 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.011 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.011 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d71eaa1-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.012 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d71eaa1-d4, col_values=(('external_ids', {'iface-id': '8d71eaa1-d4f2-413e-9640-7704328de4fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:97:18', 'vm-uuid': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.071 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:24 compute-0 NetworkManager[48914]: <info>  [1768936284.0729] manager: (tap8d71eaa1-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.075 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.078 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.079 254065 INFO os_vif [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4')
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.137 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.137 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.137 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:c8:97:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.138 254065 INFO nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Using config drive
Jan 20 19:11:24 compute-0 nova_compute[254061]: 2026-01-20 19:11:24.162 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:11:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3451685413' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.010 254065 INFO nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Creating config drive at /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/disk.config
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.014 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7huuzo0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.141 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx7huuzo0" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.176 254065 DEBUG nova.storage.rbd_utils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.180 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/disk.config bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:11:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:25.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.600 254065 DEBUG nova.network.neutron [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updated VIF entry in instance network info cache for port 8d71eaa1-d4f2-413e-9640-7704328de4fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.601 254065 DEBUG nova.network.neutron [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:11:25 compute-0 nova_compute[254061]: 2026-01-20 19:11:25.618 254065 DEBUG oslo_concurrency.lockutils [req-0b9bfc1b-832c-4449-9a8d-2cb7209da10b req-ea4fd99f-8fab-4304-9e01-bb9e758bd469 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:11:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:25.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:26 compute-0 ceph-mon[74381]: pgmap v879: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:11:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:11:26 compute-0 nova_compute[254061]: 2026-01-20 19:11:26.394 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:11:26 compute-0 nova_compute[254061]: 2026-01-20 19:11:26.812 254065 DEBUG oslo_concurrency.processutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/disk.config bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:11:26 compute-0 nova_compute[254061]: 2026-01-20 19:11:26.813 254065 INFO nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Deleting local config drive /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/disk.config because it was imported into RBD.
Jan 20 19:11:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:26 compute-0 kernel: tap8d71eaa1-d4: entered promiscuous mode
Jan 20 19:11:26 compute-0 NetworkManager[48914]: <info>  [1768936286.8834] manager: (tap8d71eaa1-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 20 19:11:26 compute-0 nova_compute[254061]: 2026-01-20 19:11:26.884 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:26 compute-0 ovn_controller[155128]: 2026-01-20T19:11:26Z|00049|binding|INFO|Claiming lport 8d71eaa1-d4f2-413e-9640-7704328de4fc for this chassis.
Jan 20 19:11:26 compute-0 ovn_controller[155128]: 2026-01-20T19:11:26Z|00050|binding|INFO|8d71eaa1-d4f2-413e-9640-7704328de4fc: Claiming fa:16:3e:c8:97:18 10.100.0.10
Jan 20 19:11:26 compute-0 nova_compute[254061]: 2026-01-20 19:11:26.899 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.933 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:97:18 10.100.0.10'], port_security=['fa:16:3e:c8:97:18 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1da04d3e-03f4-48b8-9af0-ca4e3c95d834', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b92aeeb0-ccb0-440f-b327-55f658bc00cf, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=8d71eaa1-d4f2-413e-9640-7704328de4fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:11:26 compute-0 systemd-udevd[265792]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.934 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 8d71eaa1-d4f2-413e-9640-7704328de4fc in datapath d89a966b-cfbe-45ff-b257-05d5877a2da4 bound to our chassis
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.935 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d89a966b-cfbe-45ff-b257-05d5877a2da4
Jan 20 19:11:26 compute-0 systemd-machined[220746]: New machine qemu-3-instance-00000006.
Jan 20 19:11:26 compute-0 NetworkManager[48914]: <info>  [1768936286.9498] device (tap8d71eaa1-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:11:26 compute-0 NetworkManager[48914]: <info>  [1768936286.9507] device (tap8d71eaa1-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.950 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[8e829be0-0736-48df-b0be-9f68f6323897]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.952 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd89a966b-c1 in ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.954 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd89a966b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.954 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[2989503c-0ad7-4145-914f-48a87d0888cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.955 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[247ac818-b777-46b2-9a16-94f32bafa097]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:26 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000006.
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.969 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[7064fa1f-ce24-4860-aff8-8ae414b9992b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:26 compute-0 ovn_controller[155128]: 2026-01-20T19:11:26Z|00051|binding|INFO|Setting lport 8d71eaa1-d4f2-413e-9640-7704328de4fc ovn-installed in OVS
Jan 20 19:11:26 compute-0 ovn_controller[155128]: 2026-01-20T19:11:26Z|00052|binding|INFO|Setting lport 8d71eaa1-d4f2-413e-9640-7704328de4fc up in Southbound
Jan 20 19:11:26 compute-0 nova_compute[254061]: 2026-01-20 19:11:26.987 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:26 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:26.997 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[c9edc095-dc66-4c61-a0bd-428ced3c7488]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.029 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[7727912c-5715-4af9-b200-c2390cd6f5a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 NetworkManager[48914]: <info>  [1768936287.0367] manager: (tapd89a966b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.036 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[fbad01b2-e175-4ac7-9a76-65481e04d7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 systemd-udevd[265795]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.071 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[77f1bbf3-da35-4540-8aae-575a2730379c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.073 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[7753ef76-1dde-4af4-9726-cd82660602d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 NetworkManager[48914]: <info>  [1768936287.0962] device (tapd89a966b-c0): carrier: link connected
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.101 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[67120657-77d5-4085-ae2b-fb4f95be3096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.119 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9547e3-3b8f-471f-bd65-27cbfaa2a5ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd89a966b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:dc:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448288, 'reachable_time': 40125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265825, 'error': None, 'target': 'ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.136 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[80c7ba31-9661-4c91-b566-4310490ca44e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe13:dc4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448288, 'tstamp': 448288}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265826, 'error': None, 'target': 'ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.154 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[196971f9-830d-4b9f-afcd-a9391a5ea4d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd89a966b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:13:dc:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448288, 'reachable_time': 40125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265827, 'error': None, 'target': 'ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:27.181Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.184 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e827c626-a3f4-4ff8-aef9-4bc8817060a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.190 254065 DEBUG nova.compute.manager [req-bba30da7-0bce-4ded-aa21-59630798ceca req-d9a76fc8-5003-4d6c-bf0d-53b7587215f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.191 254065 DEBUG oslo_concurrency.lockutils [req-bba30da7-0bce-4ded-aa21-59630798ceca req-d9a76fc8-5003-4d6c-bf0d-53b7587215f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.191 254065 DEBUG oslo_concurrency.lockutils [req-bba30da7-0bce-4ded-aa21-59630798ceca req-d9a76fc8-5003-4d6c-bf0d-53b7587215f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.191 254065 DEBUG oslo_concurrency.lockutils [req-bba30da7-0bce-4ded-aa21-59630798ceca req-d9a76fc8-5003-4d6c-bf0d-53b7587215f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.192 254065 DEBUG nova.compute.manager [req-bba30da7-0bce-4ded-aa21-59630798ceca req-d9a76fc8-5003-4d6c-bf0d-53b7587215f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Processing event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.250 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[95119de3-2cc0-448a-9db9-bade4d56aab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.252 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd89a966b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.252 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.252 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd89a966b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:27 compute-0 kernel: tapd89a966b-c0: entered promiscuous mode
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.254 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.256 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:27 compute-0 NetworkManager[48914]: <info>  [1768936287.2573] manager: (tapd89a966b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.257 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd89a966b-c0, col_values=(('external_ids', {'iface-id': 'db876216-b29f-45ae-933e-70465cd9196a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.258 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:27 compute-0 ovn_controller[155128]: 2026-01-20T19:11:27Z|00053|binding|INFO|Releasing lport db876216-b29f-45ae-933e-70465cd9196a from this chassis (sb_readonly=0)
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.276 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.277 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d89a966b-cfbe-45ff-b257-05d5877a2da4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d89a966b-cfbe-45ff-b257-05d5877a2da4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.278 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e72e48ce-7d7d-499f-97e6-10d74d4b5fd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.279 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-d89a966b-cfbe-45ff-b257-05d5877a2da4
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/d89a966b-cfbe-45ff-b257-05d5877a2da4.pid.haproxy
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID d89a966b-cfbe-45ff-b257-05d5877a2da4
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:11:27 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:27.279 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'env', 'PROCESS_TAG=haproxy-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d89a966b-cfbe-45ff-b257-05d5877a2da4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.476 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936287.4755077, bfdc2bf6-cb73-4586-861c-e6057f75edcc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.476 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] VM Started (Lifecycle Event)
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.479 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.485 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.488 254065 INFO nova.virt.libvirt.driver [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Instance spawned successfully.
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.488 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.502 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.507 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.511 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.511 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.512 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.512 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.512 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.513 254065 DEBUG nova.virt.libvirt.driver [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:11:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:27.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.564 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.564 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936287.4758513, bfdc2bf6-cb73-4586-861c-e6057f75edcc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.564 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] VM Paused (Lifecycle Event)
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.598 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.601 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936287.4844723, bfdc2bf6-cb73-4586-861c-e6057f75edcc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.601 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] VM Resumed (Lifecycle Event)
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.614 254065 INFO nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Took 7.53 seconds to spawn the instance on the hypervisor.
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.614 254065 DEBUG nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.658 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:11:27 compute-0 podman[265902]: 2026-01-20 19:11:27.66026556 +0000 UTC m=+0.043685148 container create dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.661 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.690 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:11:27 compute-0 systemd[1]: Started libpod-conmon-dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3.scope.
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.703 254065 INFO nova.compute.manager [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Took 9.01 seconds to build instance.
Jan 20 19:11:27 compute-0 nova_compute[254061]: 2026-01-20 19:11:27.720 254065 DEBUG oslo_concurrency.lockutils [None req-7ca29d45-9402-4014-b990-5deecc2ab8b2 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:27 compute-0 podman[265902]: 2026-01-20 19:11:27.637397584 +0000 UTC m=+0.020817192 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:11:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e27ebe66e9fb365857b3ceb27fc74b0b511892a1ebc7200cd7dfd559b9088cb4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:27 compute-0 podman[265902]: 2026-01-20 19:11:27.74633168 +0000 UTC m=+0.129751278 container init dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:11:27 compute-0 podman[265902]: 2026-01-20 19:11:27.7516162 +0000 UTC m=+0.135035788 container start dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:11:27 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [NOTICE]   (265922) : New worker (265924) forked
Jan 20 19:11:27 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [NOTICE]   (265922) : Loading success.
Jan 20 19:11:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:27.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:28 compute-0 ceph-mon[74381]: pgmap v880: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:11:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 19:11:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:28.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.073 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.264 254065 DEBUG nova.compute.manager [req-b4698c4b-862e-472e-9154-e3664be68aa3 req-732fead3-1a59-4e58-821d-3f3f6c9cdff9 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.264 254065 DEBUG oslo_concurrency.lockutils [req-b4698c4b-862e-472e-9154-e3664be68aa3 req-732fead3-1a59-4e58-821d-3f3f6c9cdff9 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.264 254065 DEBUG oslo_concurrency.lockutils [req-b4698c4b-862e-472e-9154-e3664be68aa3 req-732fead3-1a59-4e58-821d-3f3f6c9cdff9 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.264 254065 DEBUG oslo_concurrency.lockutils [req-b4698c4b-862e-472e-9154-e3664be68aa3 req-732fead3-1a59-4e58-821d-3f3f6c9cdff9 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.264 254065 DEBUG nova.compute.manager [req-b4698c4b-862e-472e-9154-e3664be68aa3 req-732fead3-1a59-4e58-821d-3f3f6c9cdff9 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:11:29 compute-0 nova_compute[254061]: 2026-01-20 19:11:29.265 254065 WARNING nova.compute.manager [req-b4698c4b-862e-472e-9154-e3664be68aa3 req-732fead3-1a59-4e58-821d-3f3f6c9cdff9 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received unexpected event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc for instance with vm_state active and task_state None.
Jan 20 19:11:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:29.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:29 compute-0 ceph-mon[74381]: pgmap v881: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 19:11:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:29] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:11:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:29] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:11:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:29.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:30 compute-0 podman[265935]: 2026-01-20 19:11:30.142735597 +0000 UTC m=+0.109363799 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:11:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:30.287 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:30.287 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:30.288 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 19:11:31 compute-0 ovn_controller[155128]: 2026-01-20T19:11:31Z|00054|binding|INFO|Releasing lport db876216-b29f-45ae-933e-70465cd9196a from this chassis (sb_readonly=0)
Jan 20 19:11:31 compute-0 NetworkManager[48914]: <info>  [1768936291.1486] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 20 19:11:31 compute-0 NetworkManager[48914]: <info>  [1768936291.1500] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.149 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.184 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:31 compute-0 ovn_controller[155128]: 2026-01-20T19:11:31Z|00055|binding|INFO|Releasing lport db876216-b29f-45ae-933e-70465cd9196a from this chassis (sb_readonly=0)
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.190 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.397 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.435 254065 DEBUG nova.compute.manager [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-changed-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.436 254065 DEBUG nova.compute.manager [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing instance network info cache due to event network-changed-8d71eaa1-d4f2-413e-9640-7704328de4fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.436 254065 DEBUG oslo_concurrency.lockutils [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.436 254065 DEBUG oslo_concurrency.lockutils [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:11:31 compute-0 nova_compute[254061]: 2026-01-20 19:11:31.436 254065 DEBUG nova.network.neutron [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing network info cache for port 8d71eaa1-d4f2-413e-9640-7704328de4fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:11:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:31.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:31 compute-0 ceph-mon[74381]: pgmap v882: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Jan 20 19:11:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:31.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:32 compute-0 nova_compute[254061]: 2026-01-20 19:11:32.555 254065 DEBUG nova.network.neutron [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updated VIF entry in instance network info cache for port 8d71eaa1-d4f2-413e-9640-7704328de4fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:11:32 compute-0 nova_compute[254061]: 2026-01-20 19:11:32.556 254065 DEBUG nova.network.neutron [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:11:32 compute-0 nova_compute[254061]: 2026-01-20 19:11:32.577 254065 DEBUG oslo_concurrency.lockutils [req-e63f4b44-f7ab-4e5d-99c6-729141af5118 req-6aecc0eb-850a-41d6-aae8-f756a2f8abb1 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:11:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:11:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:33.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:33 compute-0 ceph-mon[74381]: pgmap v883: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:11:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:33.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:34 compute-0 nova_compute[254061]: 2026-01-20 19:11:34.078 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:11:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:35.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:35 compute-0 ceph-mon[74381]: pgmap v884: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:11:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:35.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:36 compute-0 nova_compute[254061]: 2026-01-20 19:11:36.398 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v885: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:11:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:37.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:11:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:37.183Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:11:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:37.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:37.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:37 compute-0 sudo[265970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:11:37 compute-0 sudo[265970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:37 compute-0 sudo[265970]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:37 compute-0 ceph-mon[74381]: pgmap v885: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:11:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:37.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v886: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:11:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:38.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:39 compute-0 nova_compute[254061]: 2026-01-20 19:11:39.081 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:39.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:39 compute-0 sudo[265997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:39 compute-0 sudo[265997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:39 compute-0 sudo[265997]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:39 compute-0 sudo[266022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:11:39 compute-0 sudo[266022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:39] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:11:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:39] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:11:39 compute-0 ceph-mon[74381]: pgmap v886: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:11:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:39.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:11:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:11:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:40 compute-0 sudo[266022]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:40 compute-0 sudo[266081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:40 compute-0 sudo[266081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:40 compute-0 sudo[266081]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:40 compute-0 sudo[266106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- inventory --format=json-pretty --filter-for-batch
Jan 20 19:11:40 compute-0 sudo[266106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Jan 20 19:11:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:11:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:40 compute-0 podman[266173]: 2026-01-20 19:11:40.987758983 +0000 UTC m=+0.044983643 container create 216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 19:11:41 compute-0 systemd[1]: Started libpod-conmon-216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e.scope.
Jan 20 19:11:41 compute-0 podman[266173]: 2026-01-20 19:11:40.967794073 +0000 UTC m=+0.025018753 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:41 compute-0 podman[266173]: 2026-01-20 19:11:41.087172197 +0000 UTC m=+0.144396867 container init 216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:11:41 compute-0 podman[266173]: 2026-01-20 19:11:41.096297779 +0000 UTC m=+0.153522439 container start 216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:11:41 compute-0 podman[266173]: 2026-01-20 19:11:41.099208946 +0000 UTC m=+0.156433596 container attach 216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:11:41 compute-0 awesome_merkle[266189]: 167 167
Jan 20 19:11:41 compute-0 conmon[266189]: conmon 216e723a182dd765985b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e.scope/container/memory.events
Jan 20 19:11:41 compute-0 systemd[1]: libpod-216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e.scope: Deactivated successfully.
Jan 20 19:11:41 compute-0 podman[266173]: 2026-01-20 19:11:41.103463608 +0000 UTC m=+0.160688268 container died 216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_merkle, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:11:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-f681fe64425c48a24db3571777f1f160563f53a8e39630e49f2eefbab2d64eca-merged.mount: Deactivated successfully.
Jan 20 19:11:41 compute-0 podman[266173]: 2026-01-20 19:11:41.143179081 +0000 UTC m=+0.200403741 container remove 216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_merkle, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:11:41 compute-0 systemd[1]: libpod-conmon-216e723a182dd765985b010dbe977fe26d26b50d27593deca7073f43a4ea2b4e.scope: Deactivated successfully.
Jan 20 19:11:41 compute-0 podman[266211]: 2026-01-20 19:11:41.34354001 +0000 UTC m=+0.046467852 container create 37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 19:11:41 compute-0 systemd[1]: Started libpod-conmon-37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3.scope.
Jan 20 19:11:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edad10cab659e4ce9c6d73dee6e4f408139fbf162b474af968e74c008c531bed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edad10cab659e4ce9c6d73dee6e4f408139fbf162b474af968e74c008c531bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edad10cab659e4ce9c6d73dee6e4f408139fbf162b474af968e74c008c531bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:41 compute-0 nova_compute[254061]: 2026-01-20 19:11:41.400 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edad10cab659e4ce9c6d73dee6e4f408139fbf162b474af968e74c008c531bed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:41 compute-0 podman[266211]: 2026-01-20 19:11:41.414599113 +0000 UTC m=+0.117526975 container init 37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_poitras, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:11:41 compute-0 podman[266211]: 2026-01-20 19:11:41.324205468 +0000 UTC m=+0.027133340 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:41 compute-0 podman[266211]: 2026-01-20 19:11:41.424249488 +0000 UTC m=+0.127177340 container start 37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_poitras, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 19:11:41 compute-0 podman[266211]: 2026-01-20 19:11:41.427516235 +0000 UTC m=+0.130444127 container attach 37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_poitras, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:11:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:11:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:41.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:11:41 compute-0 ovn_controller[155128]: 2026-01-20T19:11:41Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:97:18 10.100.0.10
Jan 20 19:11:41 compute-0 ovn_controller[155128]: 2026-01-20T19:11:41Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:97:18 10.100.0.10
Jan 20 19:11:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:42.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]: [
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:     {
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "available": false,
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "being_replaced": false,
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "ceph_device_lvm": false,
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "lsm_data": {},
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "lvs": [],
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "path": "/dev/sr0",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "rejected_reasons": [
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "Insufficient space (<5GB)",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "Has a FileSystem"
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         ],
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         "sys_api": {
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "actuators": null,
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "device_nodes": [
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:                 "sr0"
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             ],
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "devname": "sr0",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "human_readable_size": "482.00 KB",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "id_bus": "ata",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "model": "QEMU DVD-ROM",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "nr_requests": "2",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "parent": "/dev/sr0",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "partitions": {},
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "path": "/dev/sr0",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "removable": "1",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "rev": "2.5+",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "ro": "0",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "rotational": "1",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "sas_address": "",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "sas_device_handle": "",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "scheduler_mode": "mq-deadline",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "sectors": 0,
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "sectorsize": "2048",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "size": 493568.0,
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "support_discard": "2048",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "type": "disk",
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:             "vendor": "QEMU"
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:         }
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]:     }
Jan 20 19:11:42 compute-0 thirsty_poitras[266229]: ]
Jan 20 19:11:42 compute-0 systemd[1]: libpod-37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3.scope: Deactivated successfully.
Jan 20 19:11:42 compute-0 podman[266211]: 2026-01-20 19:11:42.249786992 +0000 UTC m=+0.952714824 container died 37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_poitras, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:11:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-edad10cab659e4ce9c6d73dee6e4f408139fbf162b474af968e74c008c531bed-merged.mount: Deactivated successfully.
Jan 20 19:11:42 compute-0 podman[266211]: 2026-01-20 19:11:42.299054647 +0000 UTC m=+1.001982499 container remove 37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:11:42 compute-0 systemd[1]: libpod-conmon-37a606d034444def41fa988c00d1868e978afdee71e6293ead1e4f5188733ce3.scope: Deactivated successfully.
Jan 20 19:11:42 compute-0 sudo[266106]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:11:42 compute-0 ceph-mon[74381]: pgmap v887: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 255 B/s wr, 70 op/s
Jan 20 19:11:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v888: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 20 19:11:42 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:11:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:11:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:11:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v889: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 2.4 MiB/s wr, 56 op/s
Jan 20 19:11:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v890: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.9 MiB/s wr, 70 op/s
Jan 20 19:11:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:11:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:11:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 sudo[267645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:43 compute-0 sudo[267645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:43 compute-0 sudo[267645]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:43 compute-0 sudo[267670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:11:43 compute-0 sudo[267670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:43.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.77252334 +0000 UTC m=+0.057495754 container create 304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:11:43 compute-0 systemd[1]: Started libpod-conmon-304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab.scope.
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.747661561 +0000 UTC m=+0.032634085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.860643005 +0000 UTC m=+0.145615509 container init 304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.870598419 +0000 UTC m=+0.155570823 container start 304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.874966314 +0000 UTC m=+0.159938968 container attach 304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_germain, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:11:43 compute-0 hopeful_germain[267753]: 167 167
Jan 20 19:11:43 compute-0 systemd[1]: libpod-304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab.scope: Deactivated successfully.
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.878574519 +0000 UTC m=+0.163546933 container died 304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True)
Jan 20 19:11:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b40274037771752cd3b1af347434fafeefd89c56c19597d8a98d9278d866f5a-merged.mount: Deactivated successfully.
Jan 20 19:11:43 compute-0 podman[267737]: 2026-01-20 19:11:43.915143768 +0000 UTC m=+0.200116172 container remove 304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_germain, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:11:43 compute-0 systemd[1]: libpod-conmon-304bcb28e3cb5c6b5aa9c3c9b4ffed404335fdf3c16ce5b59087b5da33fc3aab.scope: Deactivated successfully.
Jan 20 19:11:43 compute-0 ceph-mon[74381]: pgmap v888: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:11:43 compute-0 ceph-mon[74381]: pgmap v889: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 266 KiB/s rd, 2.4 MiB/s wr, 56 op/s
Jan 20 19:11:43 compute-0 ceph-mon[74381]: pgmap v890: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.9 MiB/s wr, 70 op/s
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:11:43 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:11:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:44.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.129494448 +0000 UTC m=+0.058986804 container create 02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:11:44 compute-0 nova_compute[254061]: 2026-01-20 19:11:44.138 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:44 compute-0 systemd[1]: Started libpod-conmon-02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b.scope.
Jan 20 19:11:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6502f3deb05d4948c150351693039c0b6066d83ee70e0008326fa1c767d673/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.10314221 +0000 UTC m=+0.032634586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6502f3deb05d4948c150351693039c0b6066d83ee70e0008326fa1c767d673/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6502f3deb05d4948c150351693039c0b6066d83ee70e0008326fa1c767d673/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6502f3deb05d4948c150351693039c0b6066d83ee70e0008326fa1c767d673/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d6502f3deb05d4948c150351693039c0b6066d83ee70e0008326fa1c767d673/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.214616263 +0000 UTC m=+0.144108609 container init 02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_heisenberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.223754896 +0000 UTC m=+0.153247222 container start 02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_heisenberg, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.229258422 +0000 UTC m=+0.158750778 container attach 02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:11:44 compute-0 beautiful_heisenberg[267795]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:11:44 compute-0 beautiful_heisenberg[267795]: --> All data devices are unavailable
Jan 20 19:11:44 compute-0 systemd[1]: libpod-02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b.scope: Deactivated successfully.
Jan 20 19:11:44 compute-0 conmon[267795]: conmon 02f77dde6ec454eb95b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b.scope/container/memory.events
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.585130041 +0000 UTC m=+0.514622387 container died 02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 19:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d6502f3deb05d4948c150351693039c0b6066d83ee70e0008326fa1c767d673-merged.mount: Deactivated successfully.
Jan 20 19:11:44 compute-0 podman[267778]: 2026-01-20 19:11:44.621200607 +0000 UTC m=+0.550692933 container remove 02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Jan 20 19:11:44 compute-0 systemd[1]: libpod-conmon-02f77dde6ec454eb95b6995f3c6ccff3bf58dc3a59060030130aa018c18e2b2b.scope: Deactivated successfully.
Jan 20 19:11:44 compute-0 sudo[267670]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:44 compute-0 sudo[267825]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:44 compute-0 sudo[267825]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:44 compute-0 sudo[267825]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:44 compute-0 sudo[267850]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:11:44 compute-0 sudo[267850]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v891: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.9 MiB/s wr, 70 op/s
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.206009811 +0000 UTC m=+0.040259226 container create 76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 19:11:45 compute-0 systemd[1]: Started libpod-conmon-76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4.scope.
Jan 20 19:11:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.273725396 +0000 UTC m=+0.107974831 container init 76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.284002388 +0000 UTC m=+0.118251803 container start 76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:11:45 compute-0 elegant_boyd[267933]: 167 167
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.190655335 +0000 UTC m=+0.024904770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:45 compute-0 systemd[1]: libpod-76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4.scope: Deactivated successfully.
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.287719627 +0000 UTC m=+0.121969062 container attach 76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.288029465 +0000 UTC m=+0.122278880 container died 76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1eb3cb29af78cf318da1c12117183d4a1411a4dbf3f7abf2f64eb6d610e8d76-merged.mount: Deactivated successfully.
Jan 20 19:11:45 compute-0 podman[267917]: 2026-01-20 19:11:45.323242538 +0000 UTC m=+0.157491953 container remove 76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:45 compute-0 systemd[1]: libpod-conmon-76d78845298b19e68feb02ad34b982fc2a21c2b818838117d462b4925b61d8c4.scope: Deactivated successfully.
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.476970241 +0000 UTC m=+0.040879834 container create ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kapitsa, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:11:45 compute-0 systemd[1]: Started libpod-conmon-ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d.scope.
Jan 20 19:11:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a69ee7bfe8d1a803ddb0d0c78934851c62afa134883a385ed6d49141d78614/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a69ee7bfe8d1a803ddb0d0c78934851c62afa134883a385ed6d49141d78614/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a69ee7bfe8d1a803ddb0d0c78934851c62afa134883a385ed6d49141d78614/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4a69ee7bfe8d1a803ddb0d0c78934851c62afa134883a385ed6d49141d78614/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.552075692 +0000 UTC m=+0.115985305 container init ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kapitsa, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.45769113 +0000 UTC m=+0.021600743 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.558728497 +0000 UTC m=+0.122638090 container start ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kapitsa, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.561771778 +0000 UTC m=+0.125681371 container attach ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 19:11:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:45.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]: {
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:     "0": [
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:         {
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "devices": [
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "/dev/loop3"
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             ],
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "lv_name": "ceph_lv0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "lv_size": "21470642176",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "name": "ceph_lv0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "tags": {
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.cluster_name": "ceph",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.crush_device_class": "",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.encrypted": "0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.osd_id": "0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.type": "block",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.vdo": "0",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:                 "ceph.with_tpm": "0"
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             },
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "type": "block",
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:             "vg_name": "ceph_vg0"
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:         }
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]:     ]
Jan 20 19:11:45 compute-0 elated_kapitsa[267971]: }
Jan 20 19:11:45 compute-0 systemd[1]: libpod-ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d.scope: Deactivated successfully.
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.829165713 +0000 UTC m=+0.393075406 container died ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:11:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4a69ee7bfe8d1a803ddb0d0c78934851c62afa134883a385ed6d49141d78614-merged.mount: Deactivated successfully.
Jan 20 19:11:45 compute-0 podman[267955]: 2026-01-20 19:11:45.881392187 +0000 UTC m=+0.445301780 container remove ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:11:45 compute-0 systemd[1]: libpod-conmon-ba5d6b255773a89c2f8ff5ffe845eaa810fe145da80c610f2bde63e7b168ab5d.scope: Deactivated successfully.
Jan 20 19:11:45 compute-0 sudo[267850]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:46.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:46 compute-0 sudo[267992]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:46 compute-0 sudo[267992]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:46 compute-0 sudo[267992]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:46 compute-0 sudo[268017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:11:46 compute-0 sudo[268017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:46 compute-0 ceph-mon[74381]: pgmap v891: 337 pgs: 337 active+clean; 112 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 329 KiB/s rd, 2.9 MiB/s wr, 70 op/s
Jan 20 19:11:46 compute-0 nova_compute[254061]: 2026-01-20 19:11:46.448 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.615728314 +0000 UTC m=+0.040186905 container create 3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mclean, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:11:46 compute-0 systemd[1]: Started libpod-conmon-3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e.scope.
Jan 20 19:11:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.599228347 +0000 UTC m=+0.023686958 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.701481116 +0000 UTC m=+0.125939727 container init 3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mclean, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.714725268 +0000 UTC m=+0.139183849 container start 3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mclean, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:11:46 compute-0 quizzical_mclean[268102]: 167 167
Jan 20 19:11:46 compute-0 systemd[1]: libpod-3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e.scope: Deactivated successfully.
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.720591553 +0000 UTC m=+0.145050164 container attach 3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.721417435 +0000 UTC m=+0.145876026 container died 3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:11:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a17742f55efebf4ac91368de5f02046acef3731b8372237adf50ac23be5993ae-merged.mount: Deactivated successfully.
Jan 20 19:11:46 compute-0 podman[268085]: 2026-01-20 19:11:46.755143739 +0000 UTC m=+0.179602330 container remove 3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_mclean, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:11:46 compute-0 systemd[1]: libpod-conmon-3d378d689fdb3e138815e6b17b743ef360a7099bbc462b7159117d6cc48bfb1e.scope: Deactivated successfully.
Jan 20 19:11:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:46 compute-0 podman[268127]: 2026-01-20 19:11:46.96956545 +0000 UTC m=+0.038766738 container create 15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:11:47 compute-0 systemd[1]: Started libpod-conmon-15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be.scope.
Jan 20 19:11:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b116138c2aa7d36ec39bdd1cffe36246c6fc36827e0c48cb3ff98c25cf20ca6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b116138c2aa7d36ec39bdd1cffe36246c6fc36827e0c48cb3ff98c25cf20ca6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b116138c2aa7d36ec39bdd1cffe36246c6fc36827e0c48cb3ff98c25cf20ca6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b116138c2aa7d36ec39bdd1cffe36246c6fc36827e0c48cb3ff98c25cf20ca6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:47 compute-0 podman[268127]: 2026-01-20 19:11:46.954764337 +0000 UTC m=+0.023965605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:11:47 compute-0 podman[268127]: 2026-01-20 19:11:47.055444725 +0000 UTC m=+0.124645993 container init 15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:11:47 compute-0 podman[268127]: 2026-01-20 19:11:47.066490758 +0000 UTC m=+0.135691996 container start 15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:11:47 compute-0 podman[268127]: 2026-01-20 19:11:47.073496074 +0000 UTC m=+0.142697352 container attach 15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 19:11:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 432 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 20 19:11:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:47.185Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:47.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:47 compute-0 lvm[268218]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:11:47 compute-0 lvm[268218]: VG ceph_vg0 finished
Jan 20 19:11:47 compute-0 wizardly_tesla[268142]: {}
Jan 20 19:11:47 compute-0 systemd[1]: libpod-15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be.scope: Deactivated successfully.
Jan 20 19:11:47 compute-0 podman[268127]: 2026-01-20 19:11:47.854524878 +0000 UTC m=+0.923726216 container died 15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 19:11:47 compute-0 systemd[1]: libpod-15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be.scope: Consumed 1.260s CPU time.
Jan 20 19:11:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b116138c2aa7d36ec39bdd1cffe36246c6fc36827e0c48cb3ff98c25cf20ca6-merged.mount: Deactivated successfully.
Jan 20 19:11:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:48.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:48 compute-0 podman[268127]: 2026-01-20 19:11:48.045029716 +0000 UTC m=+1.114230964 container remove 15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:11:48 compute-0 systemd[1]: libpod-conmon-15a8a4fd94bc9d822b9eadc06c3be066a2318dee7847fd04036af618268294be.scope: Deactivated successfully.
Jan 20 19:11:48 compute-0 sudo[268017]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:11:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:11:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:48 compute-0 nova_compute[254061]: 2026-01-20 19:11:48.209 254065 INFO nova.compute.manager [None req-bc6ce432-0195-4600-8f23-63eddc083558 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Get console output
Jan 20 19:11:48 compute-0 nova_compute[254061]: 2026-01-20 19:11:48.216 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:11:48 compute-0 sudo[268236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:11:48 compute-0 sudo[268236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:48 compute-0 sudo[268236]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:48 compute-0 ceph-mon[74381]: pgmap v892: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 432 KiB/s rd, 3.0 MiB/s wr, 86 op/s
Jan 20 19:11:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:11:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:48.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:11:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:48.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:11:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 470 KiB/s rd, 3.0 MiB/s wr, 92 op/s
Jan 20 19:11:49 compute-0 nova_compute[254061]: 2026-01-20 19:11:49.144 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1205697298' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:11:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1205697298' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:11:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:49.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:49] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 20 19:11:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:49] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 20 19:11:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:50.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:50 compute-0 ceph-mon[74381]: pgmap v893: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 470 KiB/s rd, 3.0 MiB/s wr, 92 op/s
Jan 20 19:11:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 53 KiB/s wr, 22 op/s
Jan 20 19:11:51 compute-0 nova_compute[254061]: 2026-01-20 19:11:51.451 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:51 compute-0 nova_compute[254061]: 2026-01-20 19:11:51.500 254065 DEBUG oslo_concurrency.lockutils [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "interface-bfdc2bf6-cb73-4586-861c-e6057f75edcc-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:51 compute-0 nova_compute[254061]: 2026-01-20 19:11:51.501 254065 DEBUG oslo_concurrency.lockutils [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "interface-bfdc2bf6-cb73-4586-861c-e6057f75edcc-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:51 compute-0 nova_compute[254061]: 2026-01-20 19:11:51.501 254065 DEBUG nova.objects.instance [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'flavor' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:11:51 compute-0 ceph-mon[74381]: pgmap v894: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 142 KiB/s rd, 53 KiB/s wr, 22 op/s
Jan 20 19:11:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:51.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:51 compute-0 nova_compute[254061]: 2026-01-20 19:11:51.878 254065 DEBUG nova.objects.instance [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_requests' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:11:51 compute-0 nova_compute[254061]: 2026-01-20 19:11:51.892 254065 DEBUG nova.network.neutron [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:11:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:52.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:52 compute-0 nova_compute[254061]: 2026-01-20 19:11:52.037 254065 DEBUG nova.policy [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:11:52 compute-0 podman[268264]: 2026-01-20 19:11:52.092757967 +0000 UTC m=+0.064650183 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:11:52 compute-0 nova_compute[254061]: 2026-01-20 19:11:52.527 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:52.528 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:11:52 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:52.529 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:11:52 compute-0 nova_compute[254061]: 2026-01-20 19:11:52.653 254065 DEBUG nova.network.neutron [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Successfully created port: 9aea074b-ae18-481e-9e32-d20b598171be _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 19:11:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 60 KiB/s wr, 20 op/s
Jan 20 19:11:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:53.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:54.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:54 compute-0 nova_compute[254061]: 2026-01-20 19:11:54.182 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:54 compute-0 ceph-mon[74381]: pgmap v895: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 60 KiB/s wr, 20 op/s
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:11:55
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', '.rgw.root', '.nfs', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data']
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.007 254065 DEBUG nova.network.neutron [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Successfully updated port: 9aea074b-ae18-481e-9e32-d20b598171be _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.025 254065 DEBUG oslo_concurrency.lockutils [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.026 254065 DEBUG oslo_concurrency.lockutils [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.026 254065 DEBUG nova.network.neutron [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.115 254065 DEBUG nova.compute.manager [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-changed-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.115 254065 DEBUG nova.compute.manager [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing instance network info cache due to event network-changed-9aea074b-ae18-481e-9e32-d20b598171be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:11:55 compute-0 nova_compute[254061]: 2026-01-20 19:11:55.115 254065 DEBUG oslo_concurrency.lockutils [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 50 KiB/s wr, 16 op/s
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
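
Annotator's note: the pg_autoscaler lines above all apply the same arithmetic: a pool's share of raw capacity, times its bias, times a cluster-wide PG budget, then quantized to a power of two with a per-pool floor. A minimal Python sketch that reproduces the logged numbers. The budget of 300 is inferred from the ratios (e.g. for 'vms', 0.0007589550978381194 * 1.0 * 300 = 0.22768652935143582, consistent with mon_target_pg_per_osd=100 across 3 OSDs), and the pg_num_min floors (1 for '.mgr', 16 for the CephFS metadata pool, 32 otherwise) are assumptions that match the "quantized to" values; neither is read directly from this log.

import math

def nearest_power_of_two(x):
    # Round to the nearest power of two; ties round up.
    if x <= 1:
        return 1
    lo = 2 ** math.floor(math.log2(x))
    return lo * 2 if (x - lo) >= (lo * 2 - x) else lo

def quantized_pg_target(capacity_ratio, bias, pg_num_min=32, pg_budget=300):
    raw = capacity_ratio * bias * pg_budget              # "pg target" in the log
    return max(pg_num_min, nearest_power_of_two(raw))    # "quantized to"

print(quantized_pg_target(0.0007589550978381194, 1.0))                 # 32, pool 'vms'
print(quantized_pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # 16, cephfs meta
print(quantized_pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # 1, pool '.mgr'
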
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:11:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:11:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:55.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:56.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
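
Annotator's note: the anonymous "HEAD / HTTP/1.0" requests hitting the beast frontend once a second from 192.168.122.100/102 look like external health probes. A minimal equivalent probe; the endpoint URL below is an assumption (the log records the client addresses but not the RGW listen address or port, so 8080, the beast default, is a guess).

import requests

RGW_URL = "http://compute-0.ctlplane.example.com:8080/"  # assumed; not in the log
resp = requests.head(RGW_URL, timeout=5)
print(resp.status_code)  # the probes above receive 200 with an empty body
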
Jan 20 19:11:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:11:56 compute-0 ceph-mon[74381]: pgmap v896: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 50 KiB/s wr, 16 op/s
Jan 20 19:11:56 compute-0 nova_compute[254061]: 2026-01-20 19:11:56.456 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:11:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 50 KiB/s wr, 16 op/s
Jan 20 19:11:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:57.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.209 254065 DEBUG nova.network.neutron [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.229 254065 DEBUG oslo_concurrency.lockutils [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.230 254065 DEBUG oslo_concurrency.lockutils [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.230 254065 DEBUG nova.network.neutron [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing network info cache for port 9aea074b-ae18-481e-9e32-d20b598171be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.233 254065 DEBUG nova.virt.libvirt.vif [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.233 254065 DEBUG nova.network.os_vif_util [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.234 254065 DEBUG nova.network.os_vif_util [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.234 254065 DEBUG os_vif [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.235 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.235 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.235 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.237 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.238 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9aea074b-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.238 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9aea074b-ae, col_values=(('external_ids', {'iface-id': '9aea074b-ae18-481e-9e32-d20b598171be', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:ca:35', 'vm-uuid': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
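
Annotator's note: the AddBridgeCommand, AddPortCommand and DbSetCommand transactions above are what os-vif runs to wire tap9aea074b-ae into br-int. A standalone sketch of the same sequence with ovsdbapp's Open_vSwitch schema API, using the identifiers from these lines; the tcp endpoint is an assumption (os-vif normally talks to the local ovsdb-server socket).

from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Assumed endpoint; substitute the local unix socket path as appropriate.
idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

external_ids = {
    'iface-id': '9aea074b-ae18-481e-9e32-d20b598171be',
    'iface-status': 'active',
    'attached-mac': 'fa:16:3e:a5:ca:35',
    'vm-uuid': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc',
}

with api.transaction(check_error=True) as txn:
    # "Transaction caused no change" above just means br-int already existed.
    txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
    txn.add(api.add_port('br-int', 'tap9aea074b-ae', may_exist=True))
    txn.add(api.db_set('Interface', 'tap9aea074b-ae',
                       ('external_ids', external_ids)))
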
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.239 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.2404] manager: (tap9aea074b-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.244 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.247 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.248 254065 INFO os_vif [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae')
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.248 254065 DEBUG nova.virt.libvirt.vif [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.248 254065 DEBUG nova.network.os_vif_util [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.249 254065 DEBUG nova.network.os_vif_util [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.251 254065 DEBUG nova.virt.libvirt.guest [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] attach device xml: <interface type="ethernet">
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <mac address="fa:16:3e:a5:ca:35"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <model type="virtio"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <mtu size="1442"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <target dev="tap9aea074b-ae"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]: </interface>
Jan 20 19:11:57 compute-0 nova_compute[254061]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
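
Annotator's note: the interface XML above is handed to libvirt as-is by guest.attach_device. An equivalent standalone call with libvirt-python is sketched below; the qemu:///system URI and the LIVE|CONFIG flag pair are the usual choices for a running, persistent domain, assumed here rather than taken from this log.

import libvirt

IFACE_XML = """<interface type="ethernet">
  <mac address="fa:16:3e:a5:ca:35"/>
  <model type="virtio"/>
  <driver name="vhost" rx_queue_size="512"/>
  <mtu size="1442"/>
  <target dev="tap9aea074b-ae"/>
</interface>"""

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByUUIDString('bfdc2bf6-cb73-4586-861c-e6057f75edcc')
    # Attach to the live domain and persist it in the stored config.
    dom.attachDeviceFlags(IFACE_XML,
                          libvirt.VIR_DOMAIN_AFFECT_LIVE |
                          libvirt.VIR_DOMAIN_AFFECT_CONFIG)
finally:
    conn.close()
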
Jan 20 19:11:57 compute-0 kernel: tap9aea074b-ae: entered promiscuous mode
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.2622] manager: (tap9aea074b-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Jan 20 19:11:57 compute-0 ovn_controller[155128]: 2026-01-20T19:11:57Z|00056|binding|INFO|Claiming lport 9aea074b-ae18-481e-9e32-d20b598171be for this chassis.
Jan 20 19:11:57 compute-0 ovn_controller[155128]: 2026-01-20T19:11:57Z|00057|binding|INFO|9aea074b-ae18-481e-9e32-d20b598171be: Claiming fa:16:3e:a5:ca:35 10.100.0.24
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.265 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.274 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ca:35 10.100.0.24'], port_security=['fa:16:3e:a5:ca:35 10.100.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527c809d-016a-41e2-8792-ec37a5eee918', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9669d00f-1ed1-4975-b80c-aea64099b405', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61d5db56-85bc-41c4-b081-957ba735f06d, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=9aea074b-ae18-481e-9e32-d20b598171be) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.276 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 9aea074b-ae18-481e-9e32-d20b598171be in datapath 527c809d-016a-41e2-8792-ec37a5eee918 bound to our chassis
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.277 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 527c809d-016a-41e2-8792-ec37a5eee918
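
Annotator's note: the PortBindingUpdatedEvent match above fires because the Port_Binding row's chassis column flipped from empty (old=Port_Binding(chassis=[])) to this chassis, which is what kicks off the metadata provisioning that follows. A minimal sketch of such a watcher with ovsdbapp's RowEvent; the class name and filter are simplified stand-ins for neutron's real event, not its exact code.

from ovsdbapp.backend.ovs_idl import event as row_event

class PortBoundHereEvent(row_event.RowEvent):
    """Fire when a southbound Port_Binding becomes bound to our chassis."""

    def __init__(self, chassis_name):
        self.chassis_name = chassis_name
        # Watch UPDATEs on the southbound Port_Binding table.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def match_fn(self, event, row, old=None):
        # The log's old row only carries the changed column: chassis=[].
        if not row.chassis or getattr(old, 'chassis', None):
            return False
        return row.chassis[0].name == self.chassis_name

    def run(self, event, row, old):
        print('Port %s bound to our chassis' % row.logical_port)
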
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.288 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d00c8d-e1c0-4852-9cb9-a989e5d8848d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.289 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap527c809d-01 in ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.290 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap527c809d-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.290 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[82c7230c-9a4e-4a9c-9704-3033aa9f883c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.291 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[756afc25-f7d8-47e2-b602-a1b98f271d8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 systemd-udevd[268297]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.302 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[63b32fcb-5ef7-42c2-b984-15e30126207a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.305 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 ovn_controller[155128]: 2026-01-20T19:11:57Z|00058|binding|INFO|Setting lport 9aea074b-ae18-481e-9e32-d20b598171be ovn-installed in OVS
Jan 20 19:11:57 compute-0 ovn_controller[155128]: 2026-01-20T19:11:57Z|00059|binding|INFO|Setting lport 9aea074b-ae18-481e-9e32-d20b598171be up in Southbound
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.3086] device (tap9aea074b-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.3096] device (tap9aea074b-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.309 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.330 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[fab24695-4bf5-4f3d-9c2f-27ab11f1bcaa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.354 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[7d5ed8f2-e107-4cd6-804e-400e834430e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.356 254065 DEBUG nova.virt.libvirt.driver [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.356 254065 DEBUG nova.virt.libvirt.driver [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.356 254065 DEBUG nova.virt.libvirt.driver [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:c8:97:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.357 254065 DEBUG nova.virt.libvirt.driver [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:a5:ca:35, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:11:57 compute-0 systemd-udevd[268301]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.360 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[35b2be69-cf78-4eb5-80e8-f51a6caef8f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.3609] manager: (tap527c809d-00): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.385 254065 DEBUG nova.virt.libvirt.guest [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:11:57</nova:creationTime>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:11:57 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     <nova:port uuid="9aea074b-ae18-481e-9e32-d20b598171be">
Jan 20 19:11:57 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Jan 20 19:11:57 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:11:57 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:11:57 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:11:57 compute-0 nova_compute[254061]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
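
Annotator's note: the <nova:instance> document above is stored as libvirt domain metadata under the nova/1.1 namespace URI and can be read back with virDomainGetMetadata. A short sketch, with the URI taken from the XML itself and the qemu:///system URI assumed:

import libvirt
from lxml import etree

NOVA_NS = 'http://openstack.org/xmlns/libvirt/nova/1.1'

conn = libvirt.open('qemu:///system')
dom = conn.lookupByUUIDString('bfdc2bf6-cb73-4586-861c-e6057f75edcc')
xml = dom.metadata(libvirt.VIR_DOMAIN_METADATA_ELEMENT, NOVA_NS, 0)
root = etree.fromstring(xml)
print(root.findtext('{%s}name' % NOVA_NS))          # instance display name
owner = root.find('{%s}owner/{%s}project' % (NOVA_NS, NOVA_NS))
print(owner.get('uuid'), owner.text)                # project uuid and name
conn.close()
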
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.390 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[18aafd78-0769-4ba5-9c96-0f6df0ad69c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.393 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[8b22f5e4-9236-47d5-8304-ad5d7e62eb07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.4143] device (tap527c809d-00): carrier: link connected
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.418 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[fc157866-fef6-406b-a6d2-d36bd7a933f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.421 254065 DEBUG oslo_concurrency.lockutils [None req-7a86482c-04b1-4315-927e-dd44b9e8b22c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "interface-bfdc2bf6-cb73-4586-861c-e6057f75edcc-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.920s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.433 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[9af9880d-4374-41f1-918d-72a24a702567]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527c809d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:53:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451320, 'reachable_time': 15255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268324, 'error': None, 'target': 'ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.450 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[6a916912-d5da-4604-a625-70d026b486b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:5340'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 451320, 'tstamp': 451320}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268325, 'error': None, 'target': 'ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.466 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[cb705f8d-2582-4b1b-a414-aae5f0cd7b12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap527c809d-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:53:40'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451320, 'reachable_time': 15255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268326, 'error': None, 'target': 'ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
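
Annotator's note: the large privsep replies above are raw pyroute2 netlink messages (RTM_NEWLINK with IFLA_* attributes) collected inside the ovnmeta- namespace on the agent's behalf. The same information can be pulled directly with pyroute2, a sketch (root required):

from pyroute2 import NetNS

with NetNS('ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918') as ns:
    for msg in ns.link('dump'):
        print(msg.get_attr('IFLA_IFNAME'),     # e.g. tap527c809d-01
              msg.get_attr('IFLA_OPERSTATE'),  # UP
              msg.get_attr('IFLA_ADDRESS'))    # fa:16:3e:43:53:40
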
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.495 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[49506825-ef48-4dbb-833f-24659ee173f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.533 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.548 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[1eee5fa3-4cad-4461-a47b-9a6cdfefbd57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.549 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527c809d-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.550 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.550 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap527c809d-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.551 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 NetworkManager[48914]: <info>  [1768936317.5525] manager: (tap527c809d-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 20 19:11:57 compute-0 kernel: tap527c809d-00: entered promiscuous mode
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.555 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.555 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap527c809d-00, col_values=(('external_ids', {'iface-id': 'c5740b21-b474-40b4-a4c2-de021dea5ad2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.556 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 ovn_controller[155128]: 2026-01-20T19:11:57Z|00060|binding|INFO|Releasing lport c5740b21-b474-40b4-a4c2-de021dea5ad2 from this chassis (sb_readonly=0)
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.562 254065 DEBUG nova.compute.manager [req-611015de-5c5e-4de6-9a7d-bfb012169e97 req-b629c8c1-2104-4670-b901-1acf06a768ca 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.562 254065 DEBUG oslo_concurrency.lockutils [req-611015de-5c5e-4de6-9a7d-bfb012169e97 req-b629c8c1-2104-4670-b901-1acf06a768ca 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.563 254065 DEBUG oslo_concurrency.lockutils [req-611015de-5c5e-4de6-9a7d-bfb012169e97 req-b629c8c1-2104-4670-b901-1acf06a768ca 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.563 254065 DEBUG oslo_concurrency.lockutils [req-611015de-5c5e-4de6-9a7d-bfb012169e97 req-b629c8c1-2104-4670-b901-1acf06a768ca 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.563 254065 DEBUG nova.compute.manager [req-611015de-5c5e-4de6-9a7d-bfb012169e97 req-b629c8c1-2104-4670-b901-1acf06a768ca 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.563 254065 WARNING nova.compute.manager [req-611015de-5c5e-4de6-9a7d-bfb012169e97 req-b629c8c1-2104-4670-b901-1acf06a768ca 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received unexpected event network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be for instance with vm_state active and task_state None.
Jan 20 19:11:57 compute-0 nova_compute[254061]: 2026-01-20 19:11:57.570 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.571 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/527c809d-016a-41e2-8792-ec37a5eee918.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/527c809d-016a-41e2-8792-ec37a5eee918.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.572 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca66cf2-89fb-463d-b075-66a8701068bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.572 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-527c809d-016a-41e2-8792-ec37a5eee918
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/527c809d-016a-41e2-8792-ec37a5eee918.pid.haproxy
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID 527c809d-016a-41e2-8792-ec37a5eee918
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:11:57 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:11:57.573 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918', 'env', 'PROCESS_TAG=haproxy-527c809d-016a-41e2-8792-ec37a5eee918', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/527c809d-016a-41e2-8792-ec37a5eee918.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
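[analyst note] The "[Errno 2]" DEBUG line at 19:11:57.571 is expected here: the agent checks for an existing haproxy pidfile before rendering the config above and spawning haproxy inside the ovnmeta- namespace via rootwrap. A rough sketch of that tolerant pidfile read, with the path taken from the log; the behavior is inferred from the DEBUG output, not neutron's actual code:

    #!/usr/bin/env python3
    # A missing pidfile is normal before the first haproxy spawn, so it
    # maps to "no pid" rather than an error, matching the DEBUG line above.
    def get_pid(pidfile):
        try:
            with open(pidfile) as fh:
                return int(fh.read().strip() or 0) or None
        except FileNotFoundError:
            return None        # not spawned yet -> caller starts haproxy

    pid = get_pid("/var/lib/neutron/external/pids/"
                  "527c809d-016a-41e2-8792-ec37a5eee918.pid.haproxy")
    print("haproxy pid:", pid)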
Jan 20 19:11:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:57.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
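[analyst note] The radosgw "beast:" access lines recur throughout this capture and carry per-request latency. A small sketch for summarizing them from an assumed saved copy (compute-0.log):

    #!/usr/bin/env python3
    # Pull per-request latency out of radosgw "beast:" access lines like
    # the one above and print a simple summary.
    import re, statistics

    LAT = re.compile(r'beast: .* "(?P<req>[A-Z]+ \S+ HTTP/[\d.]+)" '
                     r'(?P<status>\d+) .* latency=(?P<lat>[\d.]+)s')

    lats = []
    with open("compute-0.log") as fh:          # assumed capture file
        for line in fh:
            m = LAT.search(line)
            if m:
                lats.append(float(m["lat"]))

    if lats:
        print(f"{len(lats)} requests, median {statistics.median(lats):.6f}s, "
              f"max {max(lats):.6f}s")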
Jan 20 19:11:57 compute-0 sudo[268336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:11:57 compute-0 sudo[268336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:57 compute-0 sudo[268336]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:57 compute-0 podman[268382]: 2026-01-20 19:11:57.980101794 +0000 UTC m=+0.060078133 container create e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 19:11:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:11:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:11:58.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:11:58 compute-0 systemd[1]: Started libpod-conmon-e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64.scope.
Jan 20 19:11:58 compute-0 podman[268382]: 2026-01-20 19:11:57.944314385 +0000 UTC m=+0.024290764 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:11:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c4a1c451cefa6d5589fc67a8c62ee5cb1f14f5f076bd312400110fee8eca56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:58 compute-0 podman[268382]: 2026-01-20 19:11:58.072021569 +0000 UTC m=+0.151997928 container init e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:11:58 compute-0 podman[268382]: 2026-01-20 19:11:58.077051572 +0000 UTC m=+0.157027911 container start e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:11:58 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [NOTICE]   (268402) : New worker (268404) forked
Jan 20 19:11:58 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [NOTICE]   (268402) : Loading success.
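[analyst note] haproxy reports a successful load inside the neutron-haproxy-ovnmeta container created above. A quick sketch for checking that such per-network metadata proxies are still running; assumes podman on the host, with the name prefix taken from the log:

    #!/usr/bin/env python3
    # List running metadata-proxy containers by their name prefix.
    import subprocess

    out = subprocess.run(
        ["podman", "ps", "--filter", "name=neutron-haproxy-ovnmeta",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    print("running metadata proxies:", out or "none")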
Jan 20 19:11:58 compute-0 ceph-mon[74381]: pgmap v897: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 50 KiB/s wr, 16 op/s
Jan 20 19:11:58 compute-0 ovn_controller[155128]: 2026-01-20T19:11:58Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:ca:35 10.100.0.24
Jan 20 19:11:58 compute-0 ovn_controller[155128]: 2026-01-20T19:11:58Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:ca:35 10.100.0.24
Jan 20 19:11:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:11:58.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
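[analyst note] alertmanager keeps failing to deliver to the dashboard webhook receivers on compute-1/compute-2 ("context deadline exceeded"). A sketch for probing one receiver URL from the log with a comparable timeout; what the endpoint accepts is an assumption, the URL is the one logged:

    #!/usr/bin/env python3
    # Probe the prometheus_receiver endpoint that alertmanager cannot
    # reach; a short timeout stands in for alertmanager's notify deadline.
    import json, urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    body = json.dumps({"alerts": []}).encode()   # assumed minimal payload
    req = urllib.request.Request(url, data=body,
                                 headers={"Content-Type": "application/json"})
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:                       # URLError/timeout
        print("receiver unreachable, as in the log:", exc)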
Jan 20 19:11:58 compute-0 nova_compute[254061]: 2026-01-20 19:11:58.982 254065 DEBUG nova.network.neutron [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updated VIF entry in instance network info cache for port 9aea074b-ae18-481e-9e32-d20b598171be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:11:58 compute-0 nova_compute[254061]: 2026-01-20 19:11:58.983 254065 DEBUG nova.network.neutron [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
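[analyst note] The instance_info_cache update above embeds the full network_info list as JSON, so it can be recovered mechanically from the log. A sketch that extracts it and lists each VIF (assumed capture file compute-0.log):

    #!/usr/bin/env python3
    # Extract the network_info JSON embedded in the cache-update line and
    # list each port's ID, fixed IPs and active flag.
    import json

    MARK = "Updating instance_info_cache with network_info: "

    with open("compute-0.log") as fh:          # assumed capture file
        for line in fh:
            if MARK not in line:
                continue
            payload = line.split(MARK, 1)[1]
            # Trim the trailing " update_instance_cache_with_nw_info ..." tail.
            payload = payload[: payload.rindex("]") + 1]
            for vif in json.loads(payload):
                ips = [ip["address"]
                       for sub in vif["network"]["subnets"]
                       for ip in sub["ips"]]
                print(vif["id"], ips,
                      "active" if vif["active"] else "inactive")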
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.055 254065 DEBUG oslo_concurrency.lockutils [req-9aca7821-78d7-4eba-abf5-ae1bb397c85c req-e358027d-bdc4-48c1-af70-aa05b58dfd43 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:11:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 14 KiB/s wr, 5 op/s
Jan 20 19:11:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:11:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:11:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:11:59.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.669 254065 DEBUG nova.compute.manager [req-7a4d3184-5728-4fbf-b731-d7ed25488081 req-b49f81ae-a798-4134-94f6-ced11cc71298 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.670 254065 DEBUG oslo_concurrency.lockutils [req-7a4d3184-5728-4fbf-b731-d7ed25488081 req-b49f81ae-a798-4134-94f6-ced11cc71298 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.670 254065 DEBUG oslo_concurrency.lockutils [req-7a4d3184-5728-4fbf-b731-d7ed25488081 req-b49f81ae-a798-4134-94f6-ced11cc71298 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.671 254065 DEBUG oslo_concurrency.lockutils [req-7a4d3184-5728-4fbf-b731-d7ed25488081 req-b49f81ae-a798-4134-94f6-ced11cc71298 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.671 254065 DEBUG nova.compute.manager [req-7a4d3184-5728-4fbf-b731-d7ed25488081 req-b49f81ae-a798-4134-94f6-ced11cc71298 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:11:59 compute-0 nova_compute[254061]: 2026-01-20 19:11:59.671 254065 WARNING nova.compute.manager [req-7a4d3184-5728-4fbf-b731-d7ed25488081 req-b49f81ae-a798-4134-94f6-ced11cc71298 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received unexpected event network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be for instance with vm_state active and task_state None.
Jan 20 19:11:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:59] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:11:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:11:59] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:12:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:00.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:00 compute-0 ceph-mon[74381]: pgmap v898: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 14 KiB/s wr, 5 op/s
Jan 20 19:12:01 compute-0 podman[268416]: 2026-01-20 19:12:01.125011374 +0000 UTC m=+0.100240608 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 19:12:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 20 19:12:01 compute-0 nova_compute[254061]: 2026-01-20 19:12:01.456 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:01.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:02.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:02 compute-0 nova_compute[254061]: 2026-01-20 19:12:02.240 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:02 compute-0 ceph-mon[74381]: pgmap v899: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 14 KiB/s wr, 1 op/s
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 20 KiB/s wr, 2 op/s
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.164 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.165 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.166 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.167 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:12:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:03.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:12:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2012330695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.700 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
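[analyst note] The resource audit shells out to "ceph df" (0.534s here). A sketch of the same probe reduced to a free-space summary; the "stats" keys are the usual ceph df JSON fields but should be treated as an assumption for other Ceph releases:

    #!/usr/bin/env python3
    # Run the same ceph df probe Nova's resource tracker runs above and
    # print cluster totals from its JSON output.
    import json, subprocess

    raw = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True,
    ).stdout
    stats = json.loads(raw)["stats"]           # assumed key layout
    gib = 1024 ** 3
    print(f"total {stats['total_bytes'] / gib:.1f} GiB, "
          f"avail {stats['total_avail_bytes'] / gib:.1f} GiB")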
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.785 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.785 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.944 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.945 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4384MB free_disk=59.94266891479492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
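[analyst note] The headline numbers in the resource view above (free_ram, free_disk, free_vcpus) are easy to pull out of the capture; field names are exactly as logged, the file name is an assumption:

    #!/usr/bin/env python3
    # Extract the free-resource fields from "Hypervisor/Node resource view"
    # lines like the one above.
    import re

    FIELDS = re.compile(r"(free_ram|free_disk|free_vcpus)=(\S+)")
    with open("compute-0.log") as fh:          # assumed capture file
        for line in fh:
            if "Hypervisor/Node resource view" in line:
                view = dict(FIELDS.findall(line))
                print(f"free: {view['free_ram']} RAM, "
                      f"{view['free_disk']} disk, {view['free_vcpus']} vCPUs")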
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.945 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:03 compute-0 nova_compute[254061]: 2026-01-20 19:12:03.945 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:04.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.171 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Instance bfdc2bf6-cb73-4586-861c-e6057f75edcc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.171 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.172 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.263 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing inventories for resource provider cb9161e5-191d-495c-920a-01144f42a215 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.284 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating ProviderTree inventory for provider cb9161e5-191d-495c-920a-01144f42a215 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.285 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
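[analyst note] Applying placement's usual capacity rule, capacity = (total - reserved) * allocation_ratio, the inventory above yields 32 schedulable VCPUs, 7167 MB of RAM and 52.2 GB of disk. A worked sketch with the logged values:

    #!/usr/bin/env python3
    # What the inventory above means for schedulable capacity, using
    # capacity = (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")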
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.487 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing aggregate associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.598 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing trait associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:12:04 compute-0 nova_compute[254061]: 2026-01-20 19:12:04.680 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:12:04 compute-0 ceph-mon[74381]: pgmap v900: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 20 KiB/s wr, 2 op/s
Jan 20 19:12:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2012330695' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Jan 20 19:12:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:12:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605430807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:05 compute-0 nova_compute[254061]: 2026-01-20 19:12:05.198 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:12:05 compute-0 nova_compute[254061]: 2026-01-20 19:12:05.203 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:12:05 compute-0 nova_compute[254061]: 2026-01-20 19:12:05.235 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:12:05 compute-0 nova_compute[254061]: 2026-01-20 19:12:05.265 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:12:05 compute-0 nova_compute[254061]: 2026-01-20 19:12:05.266 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.320s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:05 compute-0 nova_compute[254061]: 2026-01-20 19:12:05.266 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:05.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:05 compute-0 ceph-mon[74381]: pgmap v901: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Jan 20 19:12:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/605430807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:06.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:06 compute-0 nova_compute[254061]: 2026-01-20 19:12:06.459 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1245943853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v902: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Jan 20 19:12:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:07.189Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:07 compute-0 nova_compute[254061]: 2026-01-20 19:12:07.242 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:07.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:07 compute-0 ceph-mon[74381]: pgmap v902: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 7.3 KiB/s wr, 1 op/s
Jan 20 19:12:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:08.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:08 compute-0 nova_compute[254061]: 2026-01-20 19:12:08.277 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:08 compute-0 nova_compute[254061]: 2026-01-20 19:12:08.277 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:08 compute-0 nova_compute[254061]: 2026-01-20 19:12:08.277 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:08 compute-0 nova_compute[254061]: 2026-01-20 19:12:08.277 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:12:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:08.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:12:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v903: 337 pgs: 337 active+clean; 131 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 696 KiB/s wr, 4 op/s
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.343 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.344 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.344 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 20 19:12:09 compute-0 nova_compute[254061]: 2026-01-20 19:12:09.345 254065 DEBUG nova.objects.instance [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:12:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:09.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:09] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:12:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:09] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:12:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:10.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:10 compute-0 ceph-mon[74381]: pgmap v903: 337 pgs: 337 active+clean; 131 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 696 KiB/s wr, 4 op/s
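[analyst note] The recurring pgmap lines give a cheap cluster-throughput timeline. A sketch that extracts write rate and op/s; units are as logged, file name assumed:

    #!/usr/bin/env python3
    # Track client throughput from the recurring pgmap lines above.
    import re

    PGMAP = re.compile(r"pgmap v(?P<v>\d+):.*?"
                       r"(?P<rd>[\d.]+) (?P<rdu>[KMG]iB)/s rd, "
                       r"(?P<wr>[\d.]+) (?P<wru>[KMG]iB)/s wr, "
                       r"(?P<ops>\d+) op/s")
    SCALE = {"KiB": 1, "MiB": 1024, "GiB": 1024 ** 2}   # to KiB

    with open("compute-0.log") as fh:          # assumed capture file
        for line in fh:
            m = PGMAP.search(line)
            if m:
                wr_kib = float(m["wr"]) * SCALE[m["wru"]]
                print(f"pgmap v{m['v']}: write {wr_kib:.0f} KiB/s, "
                      f"{m['ops']} op/s")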
Jan 20 19:12:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/738407294' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:12:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/483241036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v904: 337 pgs: 337 active+clean; 131 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 694 KiB/s wr, 3 op/s
Jan 20 19:12:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/574184080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/295688091' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:11 compute-0 nova_compute[254061]: 2026-01-20 19:12:11.460 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:11.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:12.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:12 compute-0 nova_compute[254061]: 2026-01-20 19:12:12.244 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:12 compute-0 ceph-mon[74381]: pgmap v904: 337 pgs: 337 active+clean; 131 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 3.6 KiB/s rd, 694 KiB/s wr, 3 op/s
Jan 20 19:12:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/547648689' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:12:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:12:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1842458928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:12:13 compute-0 ceph-mon[74381]: pgmap v905: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:12:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:13.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:14.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:12:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:12:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:15.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.761 254065 DEBUG nova.network.neutron [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.790 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.790 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.790 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.791 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.791 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.791 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.815 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.816 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:15 compute-0 nova_compute[254061]: 2026-01-20 19:12:15.816 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
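[analyst note] This closes one full pass of ComputeManager periodic tasks (update_available_resource through _cleanup_incomplete_migrations). A sketch that tallies which tasks ran in the capture, from the "Running periodic task" DEBUG lines; file name assumed:

    #!/usr/bin/env python3
    # Census of ComputeManager periodic tasks seen in the journal capture.
    import re
    from collections import Counter

    TASK = re.compile(r"Running periodic task (ComputeManager\.\w+)")
    tally = Counter()
    with open("compute-0.log") as fh:          # assumed capture file
        for line in fh:
            m = TASK.search(line)
            if m:
                tally[m[1]] += 1
    for task, n in sorted(tally.items()):
        print(f"{n:3d}  {task}")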
Jan 20 19:12:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:16.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:16 compute-0 ceph-mon[74381]: pgmap v906: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.381608) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936336381654, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2122, "num_deletes": 251, "total_data_size": 4183279, "memory_usage": 4248928, "flush_reason": "Manual Compaction"}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 20 19:12:16 compute-0 nova_compute[254061]: 2026-01-20 19:12:16.462 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936336487777, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 4029078, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25213, "largest_seqno": 27334, "table_properties": {"data_size": 4019719, "index_size": 5789, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19960, "raw_average_key_size": 20, "raw_value_size": 4000764, "raw_average_value_size": 4086, "num_data_blocks": 253, "num_entries": 979, "num_filter_entries": 979, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936134, "oldest_key_time": 1768936134, "file_creation_time": 1768936336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 106241 microseconds, and 9948 cpu microseconds.
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.487850) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 4029078 bytes OK
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.487870) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.491010) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.491027) EVENT_LOG_v1 {"time_micros": 1768936336491022, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.491044) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4174570, prev total WAL file size 4174570, number of live WAL files 2.
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.492191) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3934KB)], [56(12MB)]
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936336492228, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 17396414, "oldest_snapshot_seqno": -1}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 6106 keys, 15305801 bytes, temperature: kUnknown
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936336746956, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 15305801, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15263849, "index_size": 25586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 155310, "raw_average_key_size": 25, "raw_value_size": 15152425, "raw_average_value_size": 2481, "num_data_blocks": 1041, "num_entries": 6106, "num_filter_entries": 6106, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936336, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.747247) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 15305801 bytes
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.749531) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 68.3 rd, 60.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.7 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(8.1) write-amplify(3.8) OK, records in: 6626, records dropped: 520 output_compression: NoCompression
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.749566) EVENT_LOG_v1 {"time_micros": 1768936336749552, "job": 30, "event": "compaction_finished", "compaction_time_micros": 254839, "compaction_time_cpu_micros": 29849, "output_level": 6, "num_output_files": 1, "total_output_size": 15305801, "num_input_records": 6626, "num_output_records": 6106, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936336750820, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936336753961, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.492062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.754182) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.754193) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.754199) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.754202) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:16 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:12:16.754205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 932 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 20 19:12:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:17.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:17 compute-0 nova_compute[254061]: 2026-01-20 19:12:17.246 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:17.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:17 compute-0 sudo[268504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:12:17 compute-0 sudo[268504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:17 compute-0 sudo[268504]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:18.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:18 compute-0 ceph-mon[74381]: pgmap v907: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 932 KiB/s rd, 1.8 MiB/s wr, 62 op/s
Jan 20 19:12:18 compute-0 nova_compute[254061]: 2026-01-20 19:12:18.839 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:18 compute-0 nova_compute[254061]: 2026-01-20 19:12:18.840 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:18.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 19:12:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:19.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:19] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:12:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:19] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:12:19 compute-0 ceph-mon[74381]: pgmap v908: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Jan 20 19:12:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:20.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:12:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Jan 20 19:12:21 compute-0 nova_compute[254061]: 2026-01-20 19:12:21.465 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:21.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:22.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:22 compute-0 nova_compute[254061]: 2026-01-20 19:12:22.275 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:22 compute-0 ceph-mon[74381]: pgmap v909: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 86 op/s
Jan 20 19:12:23 compute-0 podman[268535]: 2026-01-20 19:12:23.074682678 +0000 UTC m=+0.049096922 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 19:12:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 100 op/s
Jan 20 19:12:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:23.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:24.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:24 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Jan 20 19:12:24 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Jan 20 19:12:24 compute-0 ceph-mon[74381]: pgmap v910: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 100 op/s
Jan 20 19:12:24 compute-0 radosgw[89571]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v911: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Jan 20 19:12:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:12:25 compute-0 ceph-mon[74381]: pgmap v911: 337 pgs: 337 active+clean; 167 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Jan 20 19:12:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:25.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:26.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:12:26 compute-0 nova_compute[254061]: 2026-01-20 19:12:26.469 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v912: 337 pgs: 337 active+clean; 187 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 213 op/s
Jan 20 19:12:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:27.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:27 compute-0 nova_compute[254061]: 2026-01-20 19:12:27.278 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:27.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:28.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:28 compute-0 ceph-mon[74381]: pgmap v912: 337 pgs: 337 active+clean; 187 MiB data, 328 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 213 op/s
Jan 20 19:12:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:28.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v913: 337 pgs: 337 active+clean; 187 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 231 op/s
Jan 20 19:12:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:29.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:29] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:12:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:29] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:12:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:30.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:30.288 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:30.288 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:30.289 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:30 compute-0 ceph-mon[74381]: pgmap v913: 337 pgs: 337 active+clean; 187 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.9 MiB/s wr, 231 op/s
Jan 20 19:12:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v914: 337 pgs: 337 active+clean; 187 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 573 KiB/s rd, 1.9 MiB/s wr, 203 op/s
Jan 20 19:12:31 compute-0 nova_compute[254061]: 2026-01-20 19:12:31.472 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:31.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:32.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:32 compute-0 podman[268563]: 2026-01-20 19:12:32.140783241 +0000 UTC m=+0.114345891 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:12:32 compute-0 nova_compute[254061]: 2026-01-20 19:12:32.280 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:32 compute-0 ceph-mon[74381]: pgmap v914: 337 pgs: 337 active+clean; 187 MiB data, 329 MiB used, 60 GiB / 60 GiB avail; 573 KiB/s rd, 1.9 MiB/s wr, 203 op/s
Jan 20 19:12:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v915: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 883 KiB/s rd, 2.1 MiB/s wr, 243 op/s
Jan 20 19:12:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:34.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:34 compute-0 ceph-mon[74381]: pgmap v915: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 883 KiB/s rd, 2.1 MiB/s wr, 243 op/s
Jan 20 19:12:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v916: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 2.1 MiB/s wr, 229 op/s
Jan 20 19:12:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:35.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:12:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:36.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:36 compute-0 ceph-mon[74381]: pgmap v916: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 2.1 MiB/s wr, 229 op/s
Jan 20 19:12:36 compute-0 nova_compute[254061]: 2026-01-20 19:12:36.476 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:37 compute-0 nova_compute[254061]: 2026-01-20 19:12:37.116 254065 DEBUG nova.compute.manager [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-changed-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:37 compute-0 nova_compute[254061]: 2026-01-20 19:12:37.117 254065 DEBUG nova.compute.manager [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing instance network info cache due to event network-changed-9aea074b-ae18-481e-9e32-d20b598171be. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:12:37 compute-0 nova_compute[254061]: 2026-01-20 19:12:37.117 254065 DEBUG oslo_concurrency.lockutils [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:12:37 compute-0 nova_compute[254061]: 2026-01-20 19:12:37.117 254065 DEBUG oslo_concurrency.lockutils [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:12:37 compute-0 nova_compute[254061]: 2026-01-20 19:12:37.118 254065 DEBUG nova.network.neutron [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing network info cache for port 9aea074b-ae18-481e-9e32-d20b598171be _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:12:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v917: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 2.1 MiB/s wr, 229 op/s
Jan 20 19:12:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:37.190Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:12:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:37.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:12:37 compute-0 nova_compute[254061]: 2026-01-20 19:12:37.282 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:37.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:37 compute-0 sudo[268595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:12:37 compute-0 sudo[268595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:37 compute-0 sudo[268595]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:38.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:38 compute-0 ceph-mon[74381]: pgmap v917: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 468 KiB/s rd, 2.1 MiB/s wr, 229 op/s
Jan 20 19:12:38 compute-0 nova_compute[254061]: 2026-01-20 19:12:38.756 254065 DEBUG nova.network.neutron [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updated VIF entry in instance network info cache for port 9aea074b-ae18-481e-9e32-d20b598171be. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:12:38 compute-0 nova_compute[254061]: 2026-01-20 19:12:38.757 254065 DEBUG nova.network.neutron [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:12:38 compute-0 nova_compute[254061]: 2026-01-20 19:12:38.775 254065 DEBUG oslo_concurrency.lockutils [req-56996540-8edb-4465-b6d3-6e34cf3acc93 req-19c60945-9e1e-44ba-a96b-bc4e432af321 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:12:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:38.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v918: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 297 KiB/s wr, 92 op/s
Jan 20 19:12:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:39.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:39] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:12:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:39] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:12:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:12:40 compute-0 ceph-mon[74381]: pgmap v918: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 297 KiB/s wr, 92 op/s
Jan 20 19:12:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:12:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v919: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 242 KiB/s wr, 40 op/s
Jan 20 19:12:41 compute-0 nova_compute[254061]: 2026-01-20 19:12:41.477 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:41 compute-0 ceph-mon[74381]: pgmap v919: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 242 KiB/s wr, 40 op/s
Jan 20 19:12:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:41.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:42 compute-0 nova_compute[254061]: 2026-01-20 19:12:42.284 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v920: 337 pgs: 337 active+clean; 189 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 248 KiB/s wr, 53 op/s
Jan 20 19:12:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:43.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:12:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:44.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:44 compute-0 ceph-mon[74381]: pgmap v920: 337 pgs: 337 active+clean; 189 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 319 KiB/s rd, 248 KiB/s wr, 53 op/s
Jan 20 19:12:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4197335820' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:44 compute-0 nova_compute[254061]: 2026-01-20 19:12:44.443 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:12:44 compute-0 nova_compute[254061]: 2026-01-20 19:12:44.460 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Triggering sync for uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 20 19:12:44 compute-0 nova_compute[254061]: 2026-01-20 19:12:44.460 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:44 compute-0 nova_compute[254061]: 2026-01-20 19:12:44.460 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:44 compute-0 nova_compute[254061]: 2026-01-20 19:12:44.479 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v921: 337 pgs: 337 active+clean; 189 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 18 KiB/s wr, 14 op/s
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.377 254065 DEBUG oslo_concurrency.lockutils [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "interface-bfdc2bf6-cb73-4586-861c-e6057f75edcc-9aea074b-ae18-481e-9e32-d20b598171be" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.378 254065 DEBUG oslo_concurrency.lockutils [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "interface-bfdc2bf6-cb73-4586-861c-e6057f75edcc-9aea074b-ae18-481e-9e32-d20b598171be" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.396 254065 DEBUG nova.objects.instance [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'flavor' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.419 254065 DEBUG nova.virt.libvirt.vif [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.420 254065 DEBUG nova.network.os_vif_util [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.420 254065 DEBUG nova.network.os_vif_util [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.424 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.426 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.428 254065 DEBUG nova.virt.libvirt.driver [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Attempting to detach device tap9aea074b-ae from instance bfdc2bf6-cb73-4586-861c-e6057f75edcc from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.428 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] detach device xml: <interface type="ethernet">
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <mac address="fa:16:3e:a5:ca:35"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <model type="virtio"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <mtu size="1442"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <target dev="tap9aea074b-ae"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </interface>
Jan 20 19:12:45 compute-0 nova_compute[254061]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.434 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.437 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> not found in domain: <domain type='kvm' id='3'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <name>instance-00000006</name>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <uuid>bfdc2bf6-cb73-4586-861c-e6057f75edcc</uuid>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:11:57</nova:creationTime>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:port uuid="9aea074b-ae18-481e-9e32-d20b598171be">
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <memory unit='KiB'>131072</memory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <vcpu placement='static'>1</vcpu>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <resource>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <partition>/machine</partition>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </resource>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <sysinfo type='smbios'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <system>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='manufacturer'>RDO</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='serial'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='uuid'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='family'>Virtual Machine</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </system>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <os>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <boot dev='hd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <smbios mode='sysinfo'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </os>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <features>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <vmcoreinfo state='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </features>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <vendor>AMD</vendor>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='x2apic'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc-deadline'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='hypervisor'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc_adjust'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='spec-ctrl'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='stibp'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='ssbd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='cmp_legacy'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='overflow-recov'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='succor'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='ibrs'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='amd-ssbd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='virt-ssbd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='lbrv'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='tsc-scale'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='vmcb-clean'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='flushbyasid'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='pause-filter'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='pfthreshold'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='xsaves'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='svm'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='topoext'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='npt'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='nrip-save'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <clock offset='utc'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <timer name='hpet' present='no'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <on_poweroff>destroy</on_poweroff>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <on_reboot>restart</on_reboot>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <on_crash>destroy</on_crash>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <disk type='network' device='disk'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk' index='2'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='vda' bus='virtio'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='virtio-disk0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <disk type='network' device='cdrom'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config' index='1'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='sda' bus='sata'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <readonly/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='sata0-0-0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pcie.0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='1' port='0x10'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='2' port='0x11'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='3' port='0x12'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='4' port='0x13'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='5' port='0x14'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='6' port='0x15'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='7' port='0x16'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='8' port='0x17'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.8'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='9' port='0x18'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.9'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='10' port='0x19'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.10'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='11' port='0x1a'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.11'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='12' port='0x1b'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.12'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='13' port='0x1c'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.13'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='14' port='0x1d'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.14'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='15' port='0x1e'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.15'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='16' port='0x1f'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.16'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='17' port='0x20'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.17'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='18' port='0x21'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.18'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='19' port='0x22'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.19'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='20' port='0x23'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.20'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='21' port='0x24'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.21'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='22' port='0x25'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.22'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='23' port='0x26'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.23'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='24' port='0x27'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.24'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='25' port='0x28'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.25'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-pci-bridge'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.26'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='usb'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='sata' index='0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='ide'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <interface type='ethernet'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <mac address='fa:16:3e:c8:97:18'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='tap8d71eaa1-d4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model type='virtio'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <mtu size='1442'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='net0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <interface type='ethernet'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <mac address='fa:16:3e:a5:ca:35'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='tap9aea074b-ae'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model type='virtio'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <mtu size='1442'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='net1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <serial type='pty'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target type='isa-serial' port='0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <model name='isa-serial'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </target>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target type='serial' port='0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </console>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <input type='tablet' bus='usb'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='input0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='usb' bus='0' port='1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <input type='mouse' bus='ps2'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='input1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <input type='keyboard' bus='ps2'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='input2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <listen type='address' address='::0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <audio id='1' type='none'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <video>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='video0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </video>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <watchdog model='itco' action='reset'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='watchdog0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </watchdog>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <memballoon model='virtio'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <stats period='10'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='balloon0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <rng model='virtio'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <backend model='random'>/dev/urandom</backend>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='rng0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <label>system_u:system_r:svirt_t:s0:c378,c582</label>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c378,c582</imagelabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <label>+107:+107</label>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <imagelabel>+107:+107</imagelabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </domain>
Jan 20 19:12:45 compute-0 nova_compute[254061]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.438 254065 INFO nova.virt.libvirt.driver [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully detached device tap9aea074b-ae from instance bfdc2bf6-cb73-4586-861c-e6057f75edcc from the persistent domain config.
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.438 254065 DEBUG nova.virt.libvirt.driver [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] (1/8): Attempting to detach device tap9aea074b-ae with device alias net1 from instance bfdc2bf6-cb73-4586-861c-e6057f75edcc from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.439 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] detach device xml: <interface type="ethernet">
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <mac address="fa:16:3e:a5:ca:35"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <model type="virtio"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <mtu size="1442"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <target dev="tap9aea074b-ae"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </interface>
Jan 20 19:12:45 compute-0 nova_compute[254061]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 20 19:12:45 compute-0 kernel: tap9aea074b-ae (unregistering): left promiscuous mode
Jan 20 19:12:45 compute-0 NetworkManager[48914]: <info>  [1768936365.5446] device (tap9aea074b-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:12:45 compute-0 ovn_controller[155128]: 2026-01-20T19:12:45Z|00061|binding|INFO|Releasing lport 9aea074b-ae18-481e-9e32-d20b598171be from this chassis (sb_readonly=0)
Jan 20 19:12:45 compute-0 ovn_controller[155128]: 2026-01-20T19:12:45Z|00062|binding|INFO|Setting lport 9aea074b-ae18-481e-9e32-d20b598171be down in Southbound
Jan 20 19:12:45 compute-0 ovn_controller[155128]: 2026-01-20T19:12:45Z|00063|binding|INFO|Removing iface tap9aea074b-ae ovn-installed in OVS
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.550 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.552 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.556 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:ca:35 10.100.0.24', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.24/28', 'neutron:device_id': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-527c809d-016a-41e2-8792-ec37a5eee918', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61d5db56-85bc-41c4-b081-957ba735f06d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=9aea074b-ae18-481e-9e32-d20b598171be) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.557 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 9aea074b-ae18-481e-9e32-d20b598171be in datapath 527c809d-016a-41e2-8792-ec37a5eee918 unbound from our chassis
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.558 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 527c809d-016a-41e2-8792-ec37a5eee918, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.560 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[93510bde-bf90-477c-a13a-3cab5cb14ce1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.559 254065 DEBUG nova.virt.libvirt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Received event <DeviceRemovedEvent: 1768936365.559224, bfdc2bf6-cb73-4586-861c-e6057f75edcc => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.560 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918 namespace which is not needed anymore
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.561 254065 DEBUG nova.virt.libvirt.driver [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Start waiting for the detach event from libvirt for device tap9aea074b-ae with device alias net1 for instance bfdc2bf6-cb73-4586-861c-e6057f75edcc _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.562 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.566 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> not found in domain: <domain type='kvm' id='3'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <name>instance-00000006</name>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <uuid>bfdc2bf6-cb73-4586-861c-e6057f75edcc</uuid>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:11:57</nova:creationTime>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:port uuid="9aea074b-ae18-481e-9e32-d20b598171be">
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.24" ipVersion="4"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <memory unit='KiB'>131072</memory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <vcpu placement='static'>1</vcpu>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <resource>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <partition>/machine</partition>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </resource>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <sysinfo type='smbios'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <system>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='manufacturer'>RDO</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='serial'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='uuid'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <entry name='family'>Virtual Machine</entry>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </system>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <os>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <boot dev='hd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <smbios mode='sysinfo'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </os>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <features>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <vmcoreinfo state='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </features>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <vendor>AMD</vendor>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='x2apic'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc-deadline'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='hypervisor'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc_adjust'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='spec-ctrl'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='stibp'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='ssbd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='cmp_legacy'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='overflow-recov'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='succor'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='ibrs'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='amd-ssbd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='virt-ssbd'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='lbrv'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='tsc-scale'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='vmcb-clean'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='flushbyasid'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='pause-filter'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='pfthreshold'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='xsaves'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='svm'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='require' name='topoext'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='npt'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <feature policy='disable' name='nrip-save'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <clock offset='utc'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <timer name='hpet' present='no'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <on_poweroff>destroy</on_poweroff>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <on_reboot>restart</on_reboot>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <on_crash>destroy</on_crash>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <disk type='network' device='disk'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk' index='2'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='vda' bus='virtio'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='virtio-disk0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <disk type='network' device='cdrom'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config' index='1'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='sda' bus='sata'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <readonly/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='sata0-0-0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pcie.0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='1' port='0x10'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='2' port='0x11'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='3' port='0x12'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='4' port='0x13'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='5' port='0x14'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='6' port='0x15'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='7' port='0x16'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='8' port='0x17'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.8'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='9' port='0x18'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.9'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='10' port='0x19'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.10'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='11' port='0x1a'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.11'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='12' port='0x1b'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.12'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='13' port='0x1c'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.13'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='14' port='0x1d'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.14'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='15' port='0x1e'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.15'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='16' port='0x1f'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.16'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='17' port='0x20'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.17'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='18' port='0x21'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.18'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='19' port='0x22'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.19'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='20' port='0x23'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.20'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='21' port='0x24'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.21'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='22' port='0x25'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.22'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='23' port='0x26'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.23'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='24' port='0x27'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.24'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target chassis='25' port='0x28'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.25'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model name='pcie-pci-bridge'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='pci.26'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='usb'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <controller type='sata' index='0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='ide'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <interface type='ethernet'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <mac address='fa:16:3e:c8:97:18'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target dev='tap8d71eaa1-d4'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model type='virtio'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <mtu size='1442'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='net0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <serial type='pty'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target type='isa-serial' port='0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:         <model name='isa-serial'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       </target>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <target type='serial' port='0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </console>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <input type='tablet' bus='usb'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='input0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='usb' bus='0' port='1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <input type='mouse' bus='ps2'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='input1'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <input type='keyboard' bus='ps2'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='input2'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <listen type='address' address='::0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <audio id='1' type='none'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <video>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='video0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </video>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <watchdog model='itco' action='reset'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='watchdog0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </watchdog>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <memballoon model='virtio'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <stats period='10'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='balloon0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <rng model='virtio'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <backend model='random'>/dev/urandom</backend>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <alias name='rng0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <label>system_u:system_r:svirt_t:s0:c378,c582</label>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c378,c582</imagelabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <label>+107:+107</label>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <imagelabel>+107:+107</imagelabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </domain>
Jan 20 19:12:45 compute-0 nova_compute[254061]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.567 254065 INFO nova.virt.libvirt.driver [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully detached device tap9aea074b-ae from instance bfdc2bf6-cb73-4586-861c-e6057f75edcc from the live domain config.
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.568 254065 DEBUG nova.virt.libvirt.vif [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.568 254065 DEBUG nova.network.os_vif_util [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.569 254065 DEBUG nova.network.os_vif_util [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.569 254065 DEBUG os_vif [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.571 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.571 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aea074b-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.573 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.576 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.579 254065 INFO os_vif [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae')
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.580 254065 DEBUG nova.virt.libvirt.guest [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:12:45</nova:creationTime>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:12:45 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:12:45 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:45 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:12:45 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:12:45 compute-0 nova_compute[254061]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 19:12:45 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [NOTICE]   (268402) : haproxy version is 2.8.14-c23fe91
Jan 20 19:12:45 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [NOTICE]   (268402) : path to executable is /usr/sbin/haproxy
Jan 20 19:12:45 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [WARNING]  (268402) : Exiting Master process...
Jan 20 19:12:45 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [ALERT]    (268402) : Current worker (268404) exited with code 143 (Terminated)
Jan 20 19:12:45 compute-0 neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918[268397]: [WARNING]  (268402) : All workers exited. Exiting... (0)
Jan 20 19:12:45 compute-0 systemd[1]: libpod-e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64.scope: Deactivated successfully.
Jan 20 19:12:45 compute-0 podman[268652]: 2026-01-20 19:12:45.686463628 +0000 UTC m=+0.042617340 container died e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:12:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:45.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64-userdata-shm.mount: Deactivated successfully.
Jan 20 19:12:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-22c4a1c451cefa6d5589fc67a8c62ee5cb1f14f5f076bd312400110fee8eca56-merged.mount: Deactivated successfully.
Jan 20 19:12:45 compute-0 podman[268652]: 2026-01-20 19:12:45.721961099 +0000 UTC m=+0.078114801 container cleanup e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 19:12:45 compute-0 systemd[1]: libpod-conmon-e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64.scope: Deactivated successfully.
Jan 20 19:12:45 compute-0 podman[268684]: 2026-01-20 19:12:45.785723338 +0000 UTC m=+0.039544898 container remove e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.791 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[ed58e95d-36de-4e7b-b551-4b9e0899a1af]: (4, ('Tue Jan 20 07:12:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918 (e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64)\ne931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64\nTue Jan 20 07:12:45 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918 (e931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64)\ne931f3457980810fda7876b2d5bb78e79d169ab22a736229c1f4303950edae64\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.792 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[6f87814d-e0fe-41a7-86ba-12a184d56fd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.793 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap527c809d-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:12:45 compute-0 kernel: tap527c809d-00: left promiscuous mode
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.794 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.814 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.816 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b730e035-b3da-4df2-a2ba-650fc2093da4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.831 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[88d6d1b3-414f-43a8-a26a-3f50d4862a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.832 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[c08b0960-723f-4378-90d7-f4c57942e830]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.844 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[5729e291-9f76-4c6b-8cf8-33eabcca81e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 451313, 'reachable_time': 21142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268699, 'error': None, 'target': 'ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 systemd[1]: run-netns-ovnmeta\x2d527c809d\x2d016a\x2d41e2\x2d8792\x2dec37a5eee918.mount: Deactivated successfully.
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.848 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-527c809d-016a-41e2-8792-ec37a5eee918 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 19:12:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:45.848 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbb7244-cb6b-4797-ab2c-384a52500703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.853 254065 DEBUG nova.compute.manager [req-ad9c7d86-a6ec-4f5f-b92e-f1998530edf4 req-fe6e2b34-058b-4400-8d5b-7447c312b011 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-unplugged-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.853 254065 DEBUG oslo_concurrency.lockutils [req-ad9c7d86-a6ec-4f5f-b92e-f1998530edf4 req-fe6e2b34-058b-4400-8d5b-7447c312b011 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.854 254065 DEBUG oslo_concurrency.lockutils [req-ad9c7d86-a6ec-4f5f-b92e-f1998530edf4 req-fe6e2b34-058b-4400-8d5b-7447c312b011 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.854 254065 DEBUG oslo_concurrency.lockutils [req-ad9c7d86-a6ec-4f5f-b92e-f1998530edf4 req-fe6e2b34-058b-4400-8d5b-7447c312b011 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.854 254065 DEBUG nova.compute.manager [req-ad9c7d86-a6ec-4f5f-b92e-f1998530edf4 req-fe6e2b34-058b-4400-8d5b-7447c312b011 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-unplugged-9aea074b-ae18-481e-9e32-d20b598171be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:12:45 compute-0 nova_compute[254061]: 2026-01-20 19:12:45.854 254065 WARNING nova.compute.manager [req-ad9c7d86-a6ec-4f5f-b92e-f1998530edf4 req-fe6e2b34-058b-4400-8d5b-7447c312b011 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received unexpected event network-vif-unplugged-9aea074b-ae18-481e-9e32-d20b598171be for instance with vm_state active and task_state None.
Jan 20 19:12:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:46.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:46 compute-0 ceph-mon[74381]: pgmap v921: 337 pgs: 337 active+clean; 189 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 18 KiB/s wr, 14 op/s
Jan 20 19:12:46 compute-0 nova_compute[254061]: 2026-01-20 19:12:46.523 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:46 compute-0 nova_compute[254061]: 2026-01-20 19:12:46.566 254065 DEBUG oslo_concurrency.lockutils [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:12:46 compute-0 nova_compute[254061]: 2026-01-20 19:12:46.566 254065 DEBUG oslo_concurrency.lockutils [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:12:46 compute-0 nova_compute[254061]: 2026-01-20 19:12:46.566 254065 DEBUG nova.network.neutron [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:12:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v922: 337 pgs: 337 active+clean; 121 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 28 KiB/s wr, 30 op/s
Jan 20 19:12:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:47.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:12:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:47.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:12:47 compute-0 ovn_controller[155128]: 2026-01-20T19:12:47Z|00064|binding|INFO|Releasing lport db876216-b29f-45ae-933e-70465cd9196a from this chassis (sb_readonly=0)
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.523 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:47.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.944 254065 DEBUG nova.compute.manager [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.946 254065 DEBUG oslo_concurrency.lockutils [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.946 254065 DEBUG oslo_concurrency.lockutils [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.947 254065 DEBUG oslo_concurrency.lockutils [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.947 254065 DEBUG nova.compute.manager [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.947 254065 WARNING nova.compute.manager [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received unexpected event network-vif-plugged-9aea074b-ae18-481e-9e32-d20b598171be for instance with vm_state active and task_state None.
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.947 254065 DEBUG nova.compute.manager [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-deleted-9aea074b-ae18-481e-9e32-d20b598171be external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.948 254065 INFO nova.compute.manager [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Neutron deleted interface 9aea074b-ae18-481e-9e32-d20b598171be; detaching it from the instance and deleting it from the info cache
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.948 254065 DEBUG nova.network.neutron [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.971 254065 DEBUG nova.objects.instance [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lazy-loading 'system_metadata' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:12:47 compute-0 nova_compute[254061]: 2026-01-20 19:12:47.993 254065 DEBUG nova.objects.instance [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lazy-loading 'flavor' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.016 254065 DEBUG nova.virt.libvirt.vif [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.017 254065 DEBUG nova.network.os_vif_util [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Converting VIF {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.017 254065 DEBUG nova.network.os_vif_util [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
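nova_to_osvif_vif turns that untyped dict into the typed VIFOpenVSwitch object logged above, which the os-vif plugin then consumes. A rough analogue of the shape of that conversion, using a plain dataclass rather than os-vif's real versioned objects (the class and function below are illustrative, not the library's API):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VIFOpenVSwitch:  # illustrative stand-in, not os-vif's class
        id: str
        address: str
        bridge_name: str
        vif_name: str
        has_traffic_filtering: bool

    def nova_to_osvif(vif: dict) -> VIFOpenVSwitch:
        # Pick out the fields the converted object above actually carries.
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            has_traffic_filtering=vif["details"]["port_filter"],
        )

    # e.g. nova_to_osvif(vif).vif_name == "tap9aea074b-ae"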
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.022 254065 DEBUG nova.virt.libvirt.guest [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.025 254065 DEBUG nova.virt.libvirt.guest [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> not found in domain: <domain type='kvm' id='3'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <name>instance-00000006</name>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <uuid>bfdc2bf6-cb73-4586-861c-e6057f75edcc</uuid>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:12:45</nova:creationTime>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:12:48 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <memory unit='KiB'>131072</memory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <vcpu placement='static'>1</vcpu>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <resource>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <partition>/machine</partition>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </resource>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <sysinfo type='smbios'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <system>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='manufacturer'>RDO</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='serial'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='uuid'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='family'>Virtual Machine</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </system>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <os>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <boot dev='hd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <smbios mode='sysinfo'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </os>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <features>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <vmcoreinfo state='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </features>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <vendor>AMD</vendor>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='x2apic'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc-deadline'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='hypervisor'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc_adjust'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='spec-ctrl'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='stibp'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='ssbd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='cmp_legacy'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='overflow-recov'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='succor'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='ibrs'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='amd-ssbd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='virt-ssbd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='lbrv'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='tsc-scale'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='vmcb-clean'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='flushbyasid'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='pause-filter'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='pfthreshold'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='xsaves'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='svm'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='topoext'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='npt'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='nrip-save'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <clock offset='utc'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <timer name='hpet' present='no'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <on_poweroff>destroy</on_poweroff>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <on_reboot>restart</on_reboot>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <on_crash>destroy</on_crash>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <disk type='network' device='disk'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk' index='2'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target dev='vda' bus='virtio'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='virtio-disk0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <disk type='network' device='cdrom'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config' index='1'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target dev='sda' bus='sata'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <readonly/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='sata0-0-0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pcie.0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='1' port='0x10'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='2' port='0x11'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='3' port='0x12'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='4' port='0x13'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='5' port='0x14'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='6' port='0x15'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='7' port='0x16'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='8' port='0x17'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.8'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='9' port='0x18'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.9'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='10' port='0x19'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.10'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='11' port='0x1a'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.11'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='12' port='0x1b'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.12'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='13' port='0x1c'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.13'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='14' port='0x1d'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.14'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='15' port='0x1e'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.15'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='16' port='0x1f'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.16'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='17' port='0x20'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.17'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='18' port='0x21'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.18'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='19' port='0x22'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.19'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='20' port='0x23'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.20'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='21' port='0x24'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.21'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='22' port='0x25'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.22'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='23' port='0x26'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.23'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='24' port='0x27'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.24'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='25' port='0x28'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.25'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-pci-bridge'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.26'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='usb'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='sata' index='0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='ide'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <interface type='ethernet'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <mac address='fa:16:3e:c8:97:18'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target dev='tap8d71eaa1-d4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model type='virtio'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <mtu size='1442'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='net0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <serial type='pty'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target type='isa-serial' port='0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <model name='isa-serial'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </target>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target type='serial' port='0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </console>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <input type='tablet' bus='usb'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='input0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='usb' bus='0' port='1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <input type='mouse' bus='ps2'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='input1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <input type='keyboard' bus='ps2'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='input2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <listen type='address' address='::0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <audio id='1' type='none'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <video>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='video0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </video>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <watchdog model='itco' action='reset'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='watchdog0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </watchdog>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <memballoon model='virtio'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <stats period='10'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='balloon0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <rng model='virtio'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <backend model='random'>/dev/urandom</backend>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='rng0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <label>system_u:system_r:svirt_t:s0:c378,c582</label>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c378,c582</imagelabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <label>+107:+107</label>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <imagelabel>+107:+107</imagelabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]: </domain>
Jan 20 19:12:48 compute-0 nova_compute[254061]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
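get_interface_by_cfg dumps the live domain XML and scans <devices> for an <interface> matching the requested config. The dump above only carries the instance's existing port (tap8d71eaa1-d4, MAC fa:16:3e:c8:97:18), not the port being attached (tap9aea074b-ae, MAC fa:16:3e:a5:ca:35), so the match fails and nova polls again a few milliseconds later, which is why the same lookup and dump repeat below. A sketch of that check with the standard library, simplified to match only on the target device name where nova compares the full interface config:

    import xml.etree.ElementTree as ET

    def has_interface(domain_xml: str, dev: str) -> bool:
        # Parse the live domain XML and look for an <interface> whose
        # <target dev=...> names the tap device expected to appear.
        root = ET.fromstring(domain_xml)
        for iface in root.findall("./devices/interface"):
            target = iface.find("target")
            if target is not None and target.get("dev") == dev:
                return True
        return False

    # Against the dump above (domain_xml holding the text between
    # <domain ...> and </domain>):
    # has_interface(domain_xml, "tap8d71eaa1-d4")  -> True  (existing port)
    # has_interface(domain_xml, "tap9aea074b-ae")  -> False (still plugging)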
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.026 254065 DEBUG nova.virt.libvirt.guest [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.030 254065 DEBUG nova.virt.libvirt.guest [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:a5:ca:35"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9aea074b-ae"/></interface> not found in domain: <domain type='kvm' id='3'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <name>instance-00000006</name>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <uuid>bfdc2bf6-cb73-4586-861c-e6057f75edcc</uuid>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:12:45</nova:creationTime>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:12:48 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <memory unit='KiB'>131072</memory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <vcpu placement='static'>1</vcpu>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <resource>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <partition>/machine</partition>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </resource>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <sysinfo type='smbios'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <system>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='manufacturer'>RDO</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='product'>OpenStack Compute</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='serial'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='uuid'>bfdc2bf6-cb73-4586-861c-e6057f75edcc</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <entry name='family'>Virtual Machine</entry>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </system>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <os>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <boot dev='hd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <smbios mode='sysinfo'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </os>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <features>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <vmcoreinfo state='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </features>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <cpu mode='custom' match='exact' check='full'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <vendor>AMD</vendor>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='x2apic'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc-deadline'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='hypervisor'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='tsc_adjust'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='spec-ctrl'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='stibp'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='ssbd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='cmp_legacy'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='overflow-recov'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='succor'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='ibrs'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='amd-ssbd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='virt-ssbd'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='lbrv'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='tsc-scale'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='vmcb-clean'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='flushbyasid'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='pause-filter'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='pfthreshold'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='xsaves'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='svm'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='require' name='topoext'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='npt'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <feature policy='disable' name='nrip-save'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <clock offset='utc'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <timer name='pit' tickpolicy='delay'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <timer name='hpet' present='no'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <on_poweroff>destroy</on_poweroff>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <on_reboot>restart</on_reboot>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <on_crash>destroy</on_crash>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <disk type='network' device='disk'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk' index='2'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target dev='vda' bus='virtio'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='virtio-disk0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <disk type='network' device='cdrom'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <driver name='qemu' type='raw' cache='none'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <auth username='openstack'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <secret type='ceph' uuid='aecbbf3b-b405-507b-97d7-637a83f5b4b1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source protocol='rbd' name='vms/bfdc2bf6-cb73-4586-861c-e6057f75edcc_disk.config' index='1'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.100' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.102' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <host name='192.168.122.101' port='6789'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </source>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target dev='sda' bus='sata'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <readonly/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='sata0-0-0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='0' model='pcie-root'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pcie.0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='1' port='0x10'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='2' port='0x11'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='3' port='0x12'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='4' port='0x13'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='5' port='0x14'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='6' port='0x15'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='7' port='0x16'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='8' port='0x17'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.8'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='9' port='0x18'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.9'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='10' port='0x19'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.10'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='11' port='0x1a'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.11'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='12' port='0x1b'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.12'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='13' port='0x1c'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.13'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='14' port='0x1d'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.14'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='15' port='0x1e'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.15'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='16' port='0x1f'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.16'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='17' port='0x20'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.17'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='18' port='0x21'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.18'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='19' port='0x22'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.19'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='20' port='0x23'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.20'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='21' port='0x24'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.21'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='22' port='0x25'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.22'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='23' port='0x26'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.23'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='24' port='0x27'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.24'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-root-port'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target chassis='25' port='0x28'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.25'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model name='pcie-pci-bridge'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='pci.26'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='usb'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <controller type='sata' index='0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='ide'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </controller>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <interface type='ethernet'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <mac address='fa:16:3e:c8:97:18'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target dev='tap8d71eaa1-d4'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model type='virtio'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <driver name='vhost' rx_queue_size='512'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <mtu size='1442'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='net0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <serial type='pty'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target type='isa-serial' port='0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:         <model name='isa-serial'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       </target>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <console type='pty' tty='/dev/pts/0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <source path='/dev/pts/0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <log file='/var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc/console.log' append='off'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <target type='serial' port='0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='serial0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </console>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <input type='tablet' bus='usb'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='input0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='usb' bus='0' port='1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <input type='mouse' bus='ps2'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='input1'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <input type='keyboard' bus='ps2'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='input2'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </input>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <listen type='address' address='::0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </graphics>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <audio id='1' type='none'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <video>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <model type='virtio' heads='1' primary='yes'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='video0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </video>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <watchdog model='itco' action='reset'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='watchdog0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </watchdog>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <memballoon model='virtio'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <stats period='10'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='balloon0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <rng model='virtio'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <backend model='random'>/dev/urandom</backend>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <alias name='rng0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <label>system_u:system_r:svirt_t:s0:c378,c582</label>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c378,c582</imagelabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <label>+107:+107</label>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <imagelabel>+107:+107</imagelabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </seclabel>
Jan 20 19:12:48 compute-0 nova_compute[254061]: </domain>
Jan 20 19:12:48 compute-0 nova_compute[254061]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.030 254065 WARNING nova.virt.libvirt.driver [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Detaching interface fa:16:3e:a5:ca:35 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9aea074b-ae' not found.
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.031 254065 DEBUG nova.virt.libvirt.vif [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.031 254065 DEBUG nova.network.os_vif_util [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Converting VIF {"id": "9aea074b-ae18-481e-9e32-d20b598171be", "address": "fa:16:3e:a5:ca:35", "network": {"id": "527c809d-016a-41e2-8792-ec37a5eee918", "bridge": "br-int", "label": "tempest-network-smoke--83173917", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9aea074b-ae", "ovs_interfaceid": "9aea074b-ae18-481e-9e32-d20b598171be", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.032 254065 DEBUG nova.network.os_vif_util [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.032 254065 DEBUG os_vif [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.033 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.033 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9aea074b-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.034 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.035 254065 INFO os_vif [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:ca:35,bridge_name='br-int',has_traffic_filtering=True,id=9aea074b-ae18-481e-9e32-d20b598171be,network=Network(527c809d-016a-41e2-8792-ec37a5eee918),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9aea074b-ae')
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.036 254065 DEBUG nova.virt.libvirt.guest [req-118d94cf-71f2-4302-b249-0f96a8c9e54e req-a60e5f5f-f59a-426f-981a-7a594054ddeb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:name>tempest-TestNetworkBasicOps-server-1235924562</nova:name>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:creationTime>2026-01-20 19:12:48</nova:creationTime>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:flavor name="m1.nano">
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:memory>128</nova:memory>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:disk>1</nova:disk>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:swap>0</nova:swap>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:vcpus>1</nova:vcpus>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:flavor>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:owner>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:owner>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   <nova:ports>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     <nova:port uuid="8d71eaa1-d4f2-413e-9640-7704328de4fc">
Jan 20 19:12:48 compute-0 nova_compute[254061]:       <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 20 19:12:48 compute-0 nova_compute[254061]:     </nova:port>
Jan 20 19:12:48 compute-0 nova_compute[254061]:   </nova:ports>
Jan 20 19:12:48 compute-0 nova_compute[254061]: </nova:instance>
Jan 20 19:12:48 compute-0 nova_compute[254061]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 20 19:12:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:48 compute-0 ceph-mon[74381]: pgmap v922: 337 pgs: 337 active+clean; 121 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 28 KiB/s wr, 30 op/s
Jan 20 19:12:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:12:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533973590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:12:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:12:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2533973590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:12:48 compute-0 sudo[268704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:48 compute-0 sudo[268704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:48 compute-0 sudo[268704]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:48 compute-0 sudo[268729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:12:48 compute-0 sudo[268729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.786 254065 INFO nova.network.neutron [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Port 9aea074b-ae18-481e-9e32-d20b598171be from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.787 254065 DEBUG nova.network.neutron [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.803 254065 DEBUG oslo_concurrency.lockutils [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.836 254065 DEBUG oslo_concurrency.lockutils [None req-8a1fe699-d167-4b49-945e-8359820194e6 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "interface-bfdc2bf6-cb73-4586-861c-e6057f75edcc-9aea074b-ae18-481e-9e32-d20b598171be" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:48.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.900 254065 DEBUG nova.compute.manager [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-changed-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.901 254065 DEBUG nova.compute.manager [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing instance network info cache due to event network-changed-8d71eaa1-d4f2-413e-9640-7704328de4fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.901 254065 DEBUG oslo_concurrency.lockutils [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.901 254065 DEBUG oslo_concurrency.lockutils [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.902 254065 DEBUG nova.network.neutron [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Refreshing network info cache for port 8d71eaa1-d4f2-413e-9640-7704328de4fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.970 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.971 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.971 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.971 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.972 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.973 254065 INFO nova.compute.manager [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Terminating instance
Jan 20 19:12:48 compute-0 nova_compute[254061]: 2026-01-20 19:12:48.973 254065 DEBUG nova.compute.manager [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 19:12:49 compute-0 sudo[268729]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v923: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 30 op/s
Jan 20 19:12:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:49.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:49] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Jan 20 19:12:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:49] "GET /metrics HTTP/1.1" 200 48471 "" "Prometheus/2.51.0"
Jan 20 19:12:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:50.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.624 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2533973590' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:12:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2533973590' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:12:50 compute-0 kernel: tap8d71eaa1-d4 (unregistering): left promiscuous mode
Jan 20 19:12:50 compute-0 NetworkManager[48914]: <info>  [1768936370.7286] device (tap8d71eaa1-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:12:50 compute-0 ovn_controller[155128]: 2026-01-20T19:12:50Z|00065|binding|INFO|Releasing lport 8d71eaa1-d4f2-413e-9640-7704328de4fc from this chassis (sb_readonly=0)
Jan 20 19:12:50 compute-0 ovn_controller[155128]: 2026-01-20T19:12:50Z|00066|binding|INFO|Setting lport 8d71eaa1-d4f2-413e-9640-7704328de4fc down in Southbound
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.737 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:50 compute-0 ovn_controller[155128]: 2026-01-20T19:12:50Z|00067|binding|INFO|Removing iface tap8d71eaa1-d4 ovn-installed in OVS
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.741 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:50.747 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:97:18 10.100.0.10'], port_security=['fa:16:3e:c8:97:18 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'bfdc2bf6-cb73-4586-861c-e6057f75edcc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1da04d3e-03f4-48b8-9af0-ca4e3c95d834', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b92aeeb0-ccb0-440f-b327-55f658bc00cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=8d71eaa1-d4f2-413e-9640-7704328de4fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:12:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:50.748 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 8d71eaa1-d4f2-413e-9640-7704328de4fc in datapath d89a966b-cfbe-45ff-b257-05d5877a2da4 unbound from our chassis
Jan 20 19:12:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:50.749 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d89a966b-cfbe-45ff-b257-05d5877a2da4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:12:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:50.750 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e6061a78-b184-4109-85c9-1173939e9b0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:50 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:50.750 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4 namespace which is not needed anymore
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.765 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.771 254065 DEBUG nova.network.neutron [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updated VIF entry in instance network info cache for port 8d71eaa1-d4f2-413e-9640-7704328de4fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.771 254065 DEBUG nova.network.neutron [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [{"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.794 254065 DEBUG oslo_concurrency.lockutils [req-55626937-ae7a-4e96-beed-afb34fd3912a req-a7914bf5-8df4-4737-97d5-72879411dacb 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-bfdc2bf6-cb73-4586-861c-e6057f75edcc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:12:50 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 20 19:12:50 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000006.scope: Consumed 17.228s CPU time.
Jan 20 19:12:50 compute-0 systemd-machined[220746]: Machine qemu-3-instance-00000006 terminated.
Jan 20 19:12:50 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [NOTICE]   (265922) : haproxy version is 2.8.14-c23fe91
Jan 20 19:12:50 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [NOTICE]   (265922) : path to executable is /usr/sbin/haproxy
Jan 20 19:12:50 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [WARNING]  (265922) : Exiting Master process...
Jan 20 19:12:50 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [WARNING]  (265922) : Exiting Master process...
Jan 20 19:12:50 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [ALERT]    (265922) : Current worker (265924) exited with code 143 (Terminated)
Jan 20 19:12:50 compute-0 neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4[265918]: [WARNING]  (265922) : All workers exited. Exiting... (0)
Jan 20 19:12:50 compute-0 systemd[1]: libpod-dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3.scope: Deactivated successfully.
Jan 20 19:12:50 compute-0 podman[268811]: 2026-01-20 19:12:50.897956104 +0000 UTC m=+0.047576671 container died dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 20 19:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3-userdata-shm.mount: Deactivated successfully.
Jan 20 19:12:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-e27ebe66e9fb365857b3ceb27fc74b0b511892a1ebc7200cd7dfd559b9088cb4-merged.mount: Deactivated successfully.
Jan 20 19:12:50 compute-0 podman[268811]: 2026-01-20 19:12:50.939724721 +0000 UTC m=+0.089345308 container cleanup dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:12:50 compute-0 systemd[1]: libpod-conmon-dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3.scope: Deactivated successfully.
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.978 254065 DEBUG nova.compute.manager [req-f841b430-4671-4551-8154-0d85b47f4763 req-abeff4b8-20fb-4613-8473-bec663cacd26 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-unplugged-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.979 254065 DEBUG oslo_concurrency.lockutils [req-f841b430-4671-4551-8154-0d85b47f4763 req-abeff4b8-20fb-4613-8473-bec663cacd26 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.979 254065 DEBUG oslo_concurrency.lockutils [req-f841b430-4671-4551-8154-0d85b47f4763 req-abeff4b8-20fb-4613-8473-bec663cacd26 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.979 254065 DEBUG oslo_concurrency.lockutils [req-f841b430-4671-4551-8154-0d85b47f4763 req-abeff4b8-20fb-4613-8473-bec663cacd26 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.979 254065 DEBUG nova.compute.manager [req-f841b430-4671-4551-8154-0d85b47f4763 req-abeff4b8-20fb-4613-8473-bec663cacd26 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-unplugged-8d71eaa1-d4f2-413e-9640-7704328de4fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:12:50 compute-0 nova_compute[254061]: 2026-01-20 19:12:50.980 254065 DEBUG nova.compute.manager [req-f841b430-4671-4551-8154-0d85b47f4763 req-abeff4b8-20fb-4613-8473-bec663cacd26 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-unplugged-8d71eaa1-d4f2-413e-9640-7704328de4fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 19:12:51 compute-0 podman[268841]: 2026-01-20 19:12:51.01023011 +0000 UTC m=+0.044396688 container remove dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.017 254065 INFO nova.virt.libvirt.driver [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Instance destroyed successfully.
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.018 254065 DEBUG nova.objects.instance [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'resources' on Instance uuid bfdc2bf6-cb73-4586-861c-e6057f75edcc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.018 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[2086ee64-1a3f-41ec-b0b9-873fa8351306]: (4, ('Tue Jan 20 07:12:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4 (dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3)\ndbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3\nTue Jan 20 07:12:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4 (dbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3)\ndbbf74dbcc022618a6a8da17f19eaf41e29886e5f13c357da7e281b9d54e82b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.020 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[2aba50ca-3fbd-456d-a599-aa2f3de505ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.021 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd89a966b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:12:51 compute-0 kernel: tapd89a966b-c0: left promiscuous mode
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.023 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.038 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.040 254065 DEBUG nova.virt.libvirt.vif [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:11:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1235924562',display_name='tempest-TestNetworkBasicOps-server-1235924562',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1235924562',id=6,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIjr6syangYxWXc3r4dCfhnhxcAaEg5oVCWg4X5MCcn6n80x4JhggPSqDkhncvG7NiQVFxqb5q9kQ+/60IAt0rodPBVgAFfcYlPDnsnj3CLQm3+or3usHJ4CImoOM7F42A==',key_name='tempest-TestNetworkBasicOps-1374279328',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:11:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-7t9yeoab',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:11:27Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=bfdc2bf6-cb73-4586-861c-e6057f75edcc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.041 254065 DEBUG nova.network.os_vif_util [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "address": "fa:16:3e:c8:97:18", "network": {"id": "d89a966b-cfbe-45ff-b257-05d5877a2da4", "bridge": "br-int", "label": "tempest-network-smoke--23620673", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d71eaa1-d4", "ovs_interfaceid": "8d71eaa1-d4f2-413e-9640-7704328de4fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.041 254065 DEBUG nova.network.os_vif_util [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.041 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[afe53dd4-1b8b-4baa-94c8-517d46099e23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.042 254065 DEBUG os_vif [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.043 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.043 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d71eaa1-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.045 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.047 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.049 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.050 254065 INFO os_vif [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:97:18,bridge_name='br-int',has_traffic_filtering=True,id=8d71eaa1-d4f2-413e-9640-7704328de4fc,network=Network(d89a966b-cfbe-45ff-b257-05d5877a2da4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d71eaa1-d4')
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.057 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[78fe2683-8859-410d-b60b-2fd7d8d6406c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.059 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[4737a376-9725-42db-9128-5432aadf580c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.075 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[2e43da1c-13a2-4592-b7b9-1d116b8c7087]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448280, 'reachable_time': 33321, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268885, 'error': None, 'target': 'ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dd89a966b\x2dcfbe\x2d45ff\x2db257\x2d05d5877a2da4.mount: Deactivated successfully.
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.079 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d89a966b-cfbe-45ff-b257-05d5877a2da4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 19:12:51 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:51.080 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[41e37dbf-dc8d-493c-929b-93560958b09f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:12:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:12:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:12:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v925: 337 pgs: 337 active+clean; 100 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 20 19:12:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v926: 337 pgs: 337 active+clean; 100 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 40 op/s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:12:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:12:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 sudo[268890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:51 compute-0 sudo[268890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:51 compute-0 sudo[268890]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:51 compute-0 sudo[268915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:12:51 compute-0 sudo[268915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.524 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.573 254065 INFO nova.virt.libvirt.driver [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Deleting instance files /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc_del
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.574 254065 INFO nova.virt.libvirt.driver [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Deletion of /var/lib/nova/instances/bfdc2bf6-cb73-4586-861c-e6057f75edcc_del complete
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.623 254065 INFO nova.compute.manager [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Took 2.65 seconds to destroy the instance on the hypervisor.
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.623 254065 DEBUG oslo.service.loopingcall [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.623 254065 DEBUG nova.compute.manager [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 19:12:51 compute-0 nova_compute[254061]: 2026-01-20 19:12:51.624 254065 DEBUG nova.network.neutron [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 19:12:51 compute-0 ceph-mon[74381]: pgmap v923: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 17 KiB/s wr, 30 op/s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: pgmap v924: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 30 op/s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:12:51 compute-0 ceph-mon[74381]: pgmap v925: 337 pgs: 337 active+clean; 100 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 20 KiB/s wr, 48 op/s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: pgmap v926: 337 pgs: 337 active+clean; 100 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 16 KiB/s wr, 40 op/s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:12:51 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:12:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:12:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:51.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:12:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:51 compute-0 podman[268980]: 2026-01-20 19:12:51.936200124 +0000 UTC m=+0.040629467 container create aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:12:51 compute-0 systemd[1]: Started libpod-conmon-aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f.scope.
Jan 20 19:12:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:52 compute-0 podman[268980]: 2026-01-20 19:12:52.008632264 +0000 UTC m=+0.113061627 container init aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kepler, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:52 compute-0 podman[268980]: 2026-01-20 19:12:51.919332378 +0000 UTC m=+0.023761741 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:12:52 compute-0 podman[268980]: 2026-01-20 19:12:52.019122071 +0000 UTC m=+0.123551414 container start aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:12:52 compute-0 podman[268980]: 2026-01-20 19:12:52.023506407 +0000 UTC m=+0.127935760 container attach aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:12:52 compute-0 busy_kepler[268998]: 167 167
Jan 20 19:12:52 compute-0 systemd[1]: libpod-aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f.scope: Deactivated successfully.
Jan 20 19:12:52 compute-0 podman[268980]: 2026-01-20 19:12:52.026641841 +0000 UTC m=+0.131071194 container died aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kepler, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:12:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-89ba6c65bb0c3ed0d511880048f182870fa85eccc94f8af4fefe608e1295155a-merged.mount: Deactivated successfully.
Jan 20 19:12:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:52.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:52 compute-0 podman[268980]: 2026-01-20 19:12:52.422725725 +0000 UTC m=+0.527155098 container remove aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_kepler, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:12:52 compute-0 systemd[1]: libpod-conmon-aae348206296ce523b19cb9e1081d2eb00b6236d3f3363ac19e035e1f67aff8f.scope: Deactivated successfully.
Jan 20 19:12:52 compute-0 podman[269025]: 2026-01-20 19:12:52.585776366 +0000 UTC m=+0.023238707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:12:52 compute-0 podman[269025]: 2026-01-20 19:12:52.752004381 +0000 UTC m=+0.189466722 container create 9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wiles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:52 compute-0 nova_compute[254061]: 2026-01-20 19:12:52.788 254065 DEBUG nova.network.neutron [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:12:52 compute-0 systemd[1]: Started libpod-conmon-9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8.scope.
Jan 20 19:12:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdf756f48da0bb72029ec64410ae73fd6f17bca91b18a587aa3fb03dddf1ec4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdf756f48da0bb72029ec64410ae73fd6f17bca91b18a587aa3fb03dddf1ec4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdf756f48da0bb72029ec64410ae73fd6f17bca91b18a587aa3fb03dddf1ec4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdf756f48da0bb72029ec64410ae73fd6f17bca91b18a587aa3fb03dddf1ec4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/efdf756f48da0bb72029ec64410ae73fd6f17bca91b18a587aa3fb03dddf1ec4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:52 compute-0 podman[269025]: 2026-01-20 19:12:52.848611551 +0000 UTC m=+0.286073922 container init 9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:12:52 compute-0 podman[269025]: 2026-01-20 19:12:52.859286113 +0000 UTC m=+0.296748454 container start 9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wiles, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:12:52 compute-0 nova_compute[254061]: 2026-01-20 19:12:52.860 254065 INFO nova.compute.manager [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Took 1.24 seconds to deallocate network for instance.
Jan 20 19:12:52 compute-0 nova_compute[254061]: 2026-01-20 19:12:52.874 254065 DEBUG nova.compute.manager [req-2ab89cd8-f448-4f84-b0f7-b0190d510843 req-1bd226ee-d63e-4995-830b-8b6fe18f9b1e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-deleted-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:52 compute-0 nova_compute[254061]: 2026-01-20 19:12:52.909 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:52 compute-0 nova_compute[254061]: 2026-01-20 19:12:52.910 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:52 compute-0 nova_compute[254061]: 2026-01-20 19:12:52.957 254065 DEBUG oslo_concurrency.processutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:12:52 compute-0 podman[269025]: 2026-01-20 19:12:52.970116609 +0000 UTC m=+0.407578930 container attach 9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.048 254065 DEBUG nova.compute.manager [req-06707b81-9e3d-42b3-9946-6801aab88869 req-f76b0f01-6e86-416a-85d2-db6e149340dc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.048 254065 DEBUG oslo_concurrency.lockutils [req-06707b81-9e3d-42b3-9946-6801aab88869 req-f76b0f01-6e86-416a-85d2-db6e149340dc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.049 254065 DEBUG oslo_concurrency.lockutils [req-06707b81-9e3d-42b3-9946-6801aab88869 req-f76b0f01-6e86-416a-85d2-db6e149340dc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.049 254065 DEBUG oslo_concurrency.lockutils [req-06707b81-9e3d-42b3-9946-6801aab88869 req-f76b0f01-6e86-416a-85d2-db6e149340dc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.049 254065 DEBUG nova.compute.manager [req-06707b81-9e3d-42b3-9946-6801aab88869 req-f76b0f01-6e86-416a-85d2-db6e149340dc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] No waiting events found dispatching network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.050 254065 WARNING nova.compute.manager [req-06707b81-9e3d-42b3-9946-6801aab88869 req-f76b0f01-6e86-416a-85d2-db6e149340dc 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Received unexpected event network-vif-plugged-8d71eaa1-d4f2-413e-9640-7704328de4fc for instance with vm_state deleted and task_state None.
Jan 20 19:12:53 compute-0 festive_wiles[269042]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:12:53 compute-0 festive_wiles[269042]: --> All data devices are unavailable
Jan 20 19:12:53 compute-0 systemd[1]: libpod-9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8.scope: Deactivated successfully.
Jan 20 19:12:53 compute-0 podman[269025]: 2026-01-20 19:12:53.213242872 +0000 UTC m=+0.650705193 container died 9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wiles, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:12:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-efdf756f48da0bb72029ec64410ae73fd6f17bca91b18a587aa3fb03dddf1ec4-merged.mount: Deactivated successfully.
Jan 20 19:12:53 compute-0 podman[269025]: 2026-01-20 19:12:53.32528095 +0000 UTC m=+0.762743311 container remove 9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:12:53 compute-0 systemd[1]: libpod-conmon-9c7d833841a2015d89bb1461540b7972d11d83a6e32bfbbf87efd5c519645ea8.scope: Deactivated successfully.
Jan 20 19:12:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v927: 337 pgs: 337 active+clean; 65 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Jan 20 19:12:53 compute-0 sudo[268915]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:53 compute-0 podman[269077]: 2026-01-20 19:12:53.384671513 +0000 UTC m=+0.124147669 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 20 19:12:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:12:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031594809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.419 254065 DEBUG oslo_concurrency.processutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.426 254065 DEBUG nova.compute.provider_tree [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:12:53 compute-0 sudo[269107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:53 compute-0 sudo[269107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:53 compute-0 sudo[269107]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.455 254065 DEBUG nova.scheduler.client.report [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:12:53 compute-0 sudo[269134]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:12:53 compute-0 sudo[269134]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.495 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.586s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4031594809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.524 254065 INFO nova.scheduler.client.report [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Deleted allocations for instance bfdc2bf6-cb73-4586-861c-e6057f75edcc
Jan 20 19:12:53 compute-0 nova_compute[254061]: 2026-01-20 19:12:53.611 254065 DEBUG oslo_concurrency.lockutils [None req-92770126-2c9e-4171-974a-0fb3ec540255 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "bfdc2bf6-cb73-4586-861c-e6057f75edcc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:12:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:53.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:53 compute-0 podman[269202]: 2026-01-20 19:12:53.915644353 +0000 UTC m=+0.041498451 container create ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:53 compute-0 systemd[1]: Started libpod-conmon-ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926.scope.
Jan 20 19:12:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:53 compute-0 podman[269202]: 2026-01-20 19:12:53.977785269 +0000 UTC m=+0.103639397 container init ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:12:53 compute-0 podman[269202]: 2026-01-20 19:12:53.985350779 +0000 UTC m=+0.111204887 container start ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 19:12:53 compute-0 mystifying_joliot[269219]: 167 167
Jan 20 19:12:53 compute-0 podman[269202]: 2026-01-20 19:12:53.988640787 +0000 UTC m=+0.114494925 container attach ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:12:53 compute-0 systemd[1]: libpod-ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926.scope: Deactivated successfully.
Jan 20 19:12:53 compute-0 podman[269202]: 2026-01-20 19:12:53.991712538 +0000 UTC m=+0.117566666 container died ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:12:53 compute-0 podman[269202]: 2026-01-20 19:12:53.898135399 +0000 UTC m=+0.023989537 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a9e0ae6efeb9bebd3c802ae7c709a5e57a9447489650b486b8727415410b39f-merged.mount: Deactivated successfully.
Jan 20 19:12:54 compute-0 podman[269202]: 2026-01-20 19:12:54.020594984 +0000 UTC m=+0.146449092 container remove ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_joliot, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:54 compute-0 systemd[1]: libpod-conmon-ed509ffe640b9997dce0dd49fe30163d6688baccc0a972c0e94f3ef462595926.scope: Deactivated successfully.
Jan 20 19:12:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:54.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.172427476 +0000 UTC m=+0.037336670 container create e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:12:54 compute-0 systemd[1]: Started libpod-conmon-e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73.scope.
Jan 20 19:12:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27876081fab72912146ea822b4a9cec35c506cf9ea5253b381b13cfe235dee8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27876081fab72912146ea822b4a9cec35c506cf9ea5253b381b13cfe235dee8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27876081fab72912146ea822b4a9cec35c506cf9ea5253b381b13cfe235dee8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27876081fab72912146ea822b4a9cec35c506cf9ea5253b381b13cfe235dee8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.248695958 +0000 UTC m=+0.113605152 container init e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_jones, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.155792316 +0000 UTC m=+0.020701530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.256099424 +0000 UTC m=+0.121008618 container start e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.259282098 +0000 UTC m=+0.124191302 container attach e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_jones, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:12:54 compute-0 priceless_jones[269261]: {
Jan 20 19:12:54 compute-0 priceless_jones[269261]:     "0": [
Jan 20 19:12:54 compute-0 priceless_jones[269261]:         {
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "devices": [
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "/dev/loop3"
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             ],
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "lv_name": "ceph_lv0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "lv_size": "21470642176",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "name": "ceph_lv0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "tags": {
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.cluster_name": "ceph",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.crush_device_class": "",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.encrypted": "0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.osd_id": "0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.type": "block",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.vdo": "0",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:                 "ceph.with_tpm": "0"
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             },
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "type": "block",
Jan 20 19:12:54 compute-0 priceless_jones[269261]:             "vg_name": "ceph_vg0"
Jan 20 19:12:54 compute-0 priceless_jones[269261]:         }
Jan 20 19:12:54 compute-0 priceless_jones[269261]:     ]
Jan 20 19:12:54 compute-0 priceless_jones[269261]: }
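The JSON block emitted by the priceless_jones container has the shape of `ceph-volume lvm list --format json`: a map of OSD id to the logical volumes backing it, with the LVM tags given both as the flat lv_tags string and as the parsed tags object. A minimal sketch of recovering the OSD-to-device mapping from that output (Python; assumes the JSON has been saved to lvm_list.json, and that no tag value contains a comma, which holds for the tags above):

    import json

    # Parse `ceph-volume lvm list --format json` output (as captured above)
    # and map each OSD id to its block LV, backing device, and OSD fsid.
    with open("lvm_list.json") as f:      # assumption: JSON saved to this file
        osds = json.load(f)

    for osd_id, lvs in osds.items():
        for lv in lvs:
            # lv_tags is the flat "k=v,k=v" rendering of the "tags" object.
            tags = dict(kv.split("=", 1) for kv in lv["lv_tags"].split(","))
            print(osd_id, lv["lv_path"], lv["devices"], tags["ceph.osd_fsid"])

For OSD 0 above this prints /dev/ceph_vg0/ceph_lv0 on /dev/loop3 with osd_fsid 5f53c0c6-6046-4836-83f9-ff93da7e674e.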
Jan 20 19:12:54 compute-0 ceph-mon[74381]: pgmap v927: 337 pgs: 337 active+clean; 65 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 17 KiB/s wr, 56 op/s
Jan 20 19:12:54 compute-0 systemd[1]: libpod-e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73.scope: Deactivated successfully.
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.5152553 +0000 UTC m=+0.380164504 container died e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_jones, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 19:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-27876081fab72912146ea822b4a9cec35c506cf9ea5253b381b13cfe235dee8b-merged.mount: Deactivated successfully.
Jan 20 19:12:54 compute-0 podman[269244]: 2026-01-20 19:12:54.557340325 +0000 UTC m=+0.422249519 container remove e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 19:12:54 compute-0 systemd[1]: libpod-conmon-e6d34ce963dc408948b2fd9bb14817147a469fbaf8904003c22b00f890133e73.scope: Deactivated successfully.
Jan 20 19:12:54 compute-0 sudo[269134]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:54 compute-0 sudo[269282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:54 compute-0 sudo[269282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:54 compute-0 sudo[269282]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:54 compute-0 sudo[269307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:12:54 compute-0 sudo[269307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:54.865 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:12:54 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:12:54.867 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:12:54 compute-0 nova_compute[254061]: 2026-01-20 19:12:54.865 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:12:55
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.control', 'backups', '.nfs', '.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
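"prepared 0/10 upmap changes" means the upmap balancer evaluated the listed pools and found nothing to move: with all 337 PGs active+clean and evenly mapped, no pg-upmap-items entries are needed, and "max misplaced 0.050000" is the ceiling on the fraction of objects a plan may put in motion at once. Both facts can be read back with standard ceph CLI calls; a small sketch (target_max_misplaced_ratio is the mgr option behind the 0.05 figure):

    import subprocess

    # Query the mgr balancer state seen in the log lines above.
    for cmd in (["ceph", "balancer", "status"],
                ["ceph", "config", "get", "mgr", "target_max_misplaced_ratio"]):
        out = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print(" ".join(cmd), "->", out.stdout.strip())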
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.103337153 +0000 UTC m=+0.046206116 container create b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_goodall, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:55 compute-0 systemd[1]: Started libpod-conmon-b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77.scope.
Jan 20 19:12:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.174479598 +0000 UTC m=+0.117348591 container init b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_goodall, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.08663128 +0000 UTC m=+0.029500273 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.180580639 +0000 UTC m=+0.123449602 container start b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.183479716 +0000 UTC m=+0.126348699 container attach b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_goodall, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:12:55 compute-0 sleepy_goodall[269392]: 167 167
Jan 20 19:12:55 compute-0 systemd[1]: libpod-b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77.scope: Deactivated successfully.
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.188264763 +0000 UTC m=+0.131133746 container died b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_goodall, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1b52bf72763f95ddd97a4e6c8364602f76f930975a5ce68e002f2c1c7fd0ce8-merged.mount: Deactivated successfully.
Jan 20 19:12:55 compute-0 podman[269375]: 2026-01-20 19:12:55.226373023 +0000 UTC m=+0.169241976 container remove b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:12:55 compute-0 systemd[1]: libpod-conmon-b70445c1d00a0a61df656a4d3cc5e9092697b0a63df9f435b07e69d996217f77.scope: Deactivated successfully.
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v928: 337 pgs: 337 active+clean; 65 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 KiB/s wr, 31 op/s
Jan 20 19:12:55 compute-0 podman[269416]: 2026-01-20 19:12:55.379956401 +0000 UTC m=+0.041304454 container create d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_pascal, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:55 compute-0 systemd[1]: Started libpod-conmon-d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16.scope.
Jan 20 19:12:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a658ef817fa5422b2c7457ba9df6ea0fb6aab9d306937e68d8e94206af1effd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a658ef817fa5422b2c7457ba9df6ea0fb6aab9d306937e68d8e94206af1effd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a658ef817fa5422b2c7457ba9df6ea0fb6aab9d306937e68d8e94206af1effd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a658ef817fa5422b2c7457ba9df6ea0fb6aab9d306937e68d8e94206af1effd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00022695523621149964 of space, bias 1.0, pg target 0.0680865708634499 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
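Each pg_autoscaler line above is the same calculation: pg target = capacity ratio x bias x overall PG budget, after which the target is quantized to a power of two and compared against the pool's current pg_num (by default the autoscaler only intervenes when the two diverge by roughly a factor of three, which is why every pool stays at its current value here). The logged numbers are consistent with a budget of 300, i.e. the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs; treat that decomposition as an inference from the figures rather than something the log states. A worked check against three of the lines above:

    # Reproduce the pg_autoscaler targets logged above. The 300 multiplier is
    # an inference: mon_target_pg_per_osd (default 100) x 3 OSDs.
    PG_BUDGET = 100 * 3

    for pool, ratio, bias in [
        ("vms", 0.00022695523621149964, 1.0),
        (".mgr", 7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]:
        print(pool, ratio * bias * PG_BUDGET)
    # vms ~0.06808657086344990, .mgr ~0.0021557249951162337,
    # cephfs.cephfs.meta ~0.0006104707950771635 -- matching the mgr lines
    # above up to float formatting.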
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:12:55 compute-0 podman[269416]: 2026-01-20 19:12:55.458183614 +0000 UTC m=+0.119531677 container init d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_pascal, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:12:55 compute-0 podman[269416]: 2026-01-20 19:12:55.363929767 +0000 UTC m=+0.025277840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:12:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:12:55 compute-0 podman[269416]: 2026-01-20 19:12:55.46632871 +0000 UTC m=+0.127676763 container start d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:12:55 compute-0 podman[269416]: 2026-01-20 19:12:55.469485444 +0000 UTC m=+0.130833497 container attach d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_pascal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:12:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:12:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:55.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:56 compute-0 nova_compute[254061]: 2026-01-20 19:12:56.044 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:12:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:56.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
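The anonymous "HEAD / HTTP/1.0" requests arriving from 192.168.122.100 and 192.168.122.102 every two seconds are external health probes against radosgw's beast frontend (haproxy-style checks), which is why they complete with 200 and near-zero latency. One probe can be reproduced by hand; the host is compute-0's ctlplane address as seen elsewhere in the log, but the journal never records which port beast is bound to, so the port below is an assumption:

    import http.client

    # Reproduce one of the anonymous health probes logged by beast above.
    # Host taken from the log; port 8080 is an assumption.
    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # the probes above all return 200
    conn.close()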
Jan 20 19:12:56 compute-0 lvm[269508]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:12:56 compute-0 lvm[269508]: VG ceph_vg0 finished
Jan 20 19:12:56 compute-0 pensive_pascal[269433]: {}
Jan 20 19:12:56 compute-0 systemd[1]: libpod-d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16.scope: Deactivated successfully.
Jan 20 19:12:56 compute-0 podman[269416]: 2026-01-20 19:12:56.195485641 +0000 UTC m=+0.856833694 container died d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_pascal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Jan 20 19:12:56 compute-0 systemd[1]: libpod-d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16.scope: Consumed 1.194s CPU time.
Jan 20 19:12:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-3a658ef817fa5422b2c7457ba9df6ea0fb6aab9d306937e68d8e94206af1effd-merged.mount: Deactivated successfully.
Jan 20 19:12:56 compute-0 podman[269416]: 2026-01-20 19:12:56.274158755 +0000 UTC m=+0.935506848 container remove d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:12:56 compute-0 systemd[1]: libpod-conmon-d02f96636ad5b24eccd5086718b202fa6978898b49d04ddc187a96811a2e8c16.scope: Deactivated successfully.
Jan 20 19:12:56 compute-0 sudo[269307]: pam_unix(sudo:session): session closed for user root
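This closes the loop opened at 19:12:54: cephadm's device scan runs as ceph-admin, which sudo-execs the mgr-shipped cephadm binary under /var/lib/ceph/<fsid>/, and that in turn launches the one-shot pensive_pascal container to run ceph-volume inside the ceph image; the bare "{}" printed at 19:12:56 is the command's entire output, since this host's only OSD is LVM-backed and "raw list" has nothing to report. The same invocation, reduced to a sketch (paths, fsid, and image digest are copied from the sudo line above; cephadm relays ceph-volume's stdout, assumed here to be pure JSON):

    import json
    import subprocess

    # Mirror the cephadm invocation from the sudo log line above.
    cmd = [
        "sudo", "/bin/python3",
        "/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/"
        "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36",
        "--image", "quay.io/ceph/ceph@sha256:"
        "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec",
        "--timeout", "895",
        "ceph-volume", "--fsid", "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
        "--", "raw", "list", "--format", "json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(json.loads(out.stdout))   # {} here: no raw-mode OSDs on this host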
Jan 20 19:12:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:12:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:56 compute-0 nova_compute[254061]: 2026-01-20 19:12:56.527 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:56 compute-0 ceph-mon[74381]: pgmap v928: 337 pgs: 337 active+clean; 65 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 KiB/s wr, 31 op/s
Jan 20 19:12:56 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:12:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:12:56 compute-0 sudo[269525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:12:56 compute-0 sudo[269525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:56 compute-0 sudo[269525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:57.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
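These dispatcher errors recur throughout this window: Alertmanager is trying to POST its webhook payload to the Ceph dashboard's Prometheus receiver on compute-1 and compute-2 and hitting the context deadline, which typically means nothing is answering on 8443 on those hosts (only one mgr serves an active dashboard at a time). To see what the receiver would get, a throwaway stand-in can be run on the target port; this is purely illustrative, as the real /api/prometheus_receiver is served by the ceph-mgr dashboard module:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Throwaway stand-in for the endpoint Alertmanager retries above; it
    # accepts the webhook POST and prints the alert list from the payload.
    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
            if self.path == "/api/prometheus_receiver":
                print(json.loads(body).get("alerts", []))
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    HTTPServer(("", 8443), Receiver).serve_forever()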
Jan 20 19:12:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 20 19:12:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:57.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:57 compute-0 nova_compute[254061]: 2026-01-20 19:12:57.756 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:57 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:12:57 compute-0 ceph-mon[74381]: pgmap v929: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 20 19:12:57 compute-0 nova_compute[254061]: 2026-01-20 19:12:57.896 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:12:57 compute-0 sudo[269551]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:12:57 compute-0 sudo[269551]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:57 compute-0 sudo[269551]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:12:58.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:12:58.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:12:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 20 19:12:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:12:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:12:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:12:59.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:12:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:59] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Jan 20 19:12:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:12:59] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Jan 20 19:13:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:00.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:00 compute-0 ceph-mon[74381]: pgmap v930: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 KiB/s wr, 41 op/s
Jan 20 19:13:01 compute-0 nova_compute[254061]: 2026-01-20 19:13:01.047 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.3 KiB/s wr, 21 op/s
Jan 20 19:13:01 compute-0 nova_compute[254061]: 2026-01-20 19:13:01.530 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:01.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:02.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:02 compute-0 ceph-mon[74381]: pgmap v931: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.3 KiB/s wr, 21 op/s
Jan 20 19:13:02 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:02.870 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
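This transaction is the follow-through on the delay announced at 19:12:54: having seen SB_Global nb_cfg move from 8 to 9, the metadata agent waited the 8 seconds it logged (19:12:54 + 8s lands at 19:13:02, as here) and then acknowledges by stamping neutron:ovn-metadata-sb-cfg=9 into its own Chassis_Private row. The equivalent write can be made by hand with ovn-sbctl; a sketch (record UUID copied from the log line, the southbound DB socket path is an assumption, and the colon-bearing key is quoted for the ctl parser):

    import subprocess

    # Hand-rolled equivalent of the agent's DbSetCommand above.
    subprocess.run([
        "ovn-sbctl", "--db=unix:/run/ovn/ovnsb_db.sock",
        "set", "Chassis_Private", "7018ca8a-de0e-4b56-bb43-675238d4f8b3",
        'external_ids:"neutron:ovn-metadata-sb-cfg"=9',
    ], check=True)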
Jan 20 19:13:03 compute-0 podman[269582]: 2026-01-20 19:13:03.108660955 +0000 UTC m=+0.083685997 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
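The health_status event above comes from podman's periodic healthcheck of the edpm-managed ovn_controller container: the configured test ("/openstack/healthcheck", bind-mounted per the config_data volumes) ran and reported healthy with a failing streak of 0. The recorded state can be read back from inspect data; a sketch (podman has spelled the State key both "Health" and "Healthcheck" across versions, so both are probed):

    import json
    import subprocess

    # Read back the health state recorded by the event above.
    out = subprocess.run(["podman", "inspect", "ovn_controller"],
                         capture_output=True, text=True, check=True)
    state = json.loads(out.stdout)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))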
Jan 20 19:13:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Jan 20 19:13:03 compute-0 ceph-mon[74381]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Jan 20 19:13:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:03.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:04.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.174 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.174 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.175 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.175 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.175 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:13:04 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93042442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.699 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/93042442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
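The pairing above shows where nova's storage numbers come from: the resource tracker shells out to ceph df as client.openstack (the subprocess launched at 19:13:04.175 returns 0 in 0.524s), and the mon audits the dispatch of the same {"prefix": "df"} command. The call, minus oslo's wrappers, plus the cluster-wide fields nova uses to size the RBD-backed disk inventory (field names follow the standard ceph df JSON layout; treat them as an assumption):

    import json
    import subprocess

    # The same query the resource tracker issues in the log lines above.
    cmd = ["ceph", "df", "--format=json", "--id", "openstack",
           "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    stats = json.loads(out.stdout)["stats"]
    GiB = 1024 ** 3
    # pgmap above reports 60 GiB / 60 GiB avail for this cluster.
    print(stats["total_bytes"] / GiB, stats["total_avail_bytes"] / GiB)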
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.869 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.871 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4550MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.871 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.871 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.951 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.951 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:13:04 compute-0 nova_compute[254061]: 2026-01-20 19:13:04.966 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 938 B/s wr, 7 op/s
Jan 20 19:13:05 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:13:05 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/294522976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:05 compute-0 nova_compute[254061]: 2026-01-20 19:13:05.453 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:05 compute-0 nova_compute[254061]: 2026-01-20 19:13:05.460 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:13:05 compute-0 nova_compute[254061]: 2026-01-20 19:13:05.482 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
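The inventory dict in the report above is the Placement-side view derived from the hypervisor totals logged two steps earlier: for each resource class, usable capacity is (total - reserved) x allocation_ratio, which is how 8 physical vCPUs advertise as 32 schedulable ones and 59 GB of disk as 52.2 GB. A worked check with the logged values:

    # Effective Placement capacity per resource class, from the inventory
    # dict logged above: (total - reserved) * allocation_ratio.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~52.2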
Jan 20 19:13:05 compute-0 nova_compute[254061]: 2026-01-20 19:13:05.516 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:13:05 compute-0 nova_compute[254061]: 2026-01-20 19:13:05.517 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:05.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:05 compute-0 ceph-mon[74381]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 938 B/s wr, 7 op/s
Jan 20 19:13:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/294522976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:06 compute-0 nova_compute[254061]: 2026-01-20 19:13:06.015 254065 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768936371.0148833, bfdc2bf6-cb73-4586-861c-e6057f75edcc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:13:06 compute-0 nova_compute[254061]: 2026-01-20 19:13:06.016 254065 INFO nova.compute.manager [-] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] VM Stopped (Lifecycle Event)
Jan 20 19:13:06 compute-0 nova_compute[254061]: 2026-01-20 19:13:06.049 254065 DEBUG nova.compute.manager [None req-b53249ce-a9e4-42f4-9cea-7614061e14f5 - - - - - -] [instance: bfdc2bf6-cb73-4586-861c-e6057f75edcc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:13:06 compute-0 nova_compute[254061]: 2026-01-20 19:13:06.050 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:06.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:06 compute-0 nova_compute[254061]: 2026-01-20 19:13:06.533 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:07.195Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 938 B/s wr, 8 op/s
Jan 20 19:13:07 compute-0 nova_compute[254061]: 2026-01-20 19:13:07.517 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:07 compute-0 nova_compute[254061]: 2026-01-20 19:13:07.517 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:07.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:08.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:08 compute-0 nova_compute[254061]: 2026-01-20 19:13:08.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:08 compute-0 nova_compute[254061]: 2026-01-20 19:13:08.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:13:08 compute-0 ceph-mon[74381]: pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 4.7 KiB/s rd, 938 B/s wr, 8 op/s
Jan 20 19:13:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:08.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:09 compute-0 nova_compute[254061]: 2026-01-20 19:13:09.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:09 compute-0 nova_compute[254061]: 2026-01-20 19:13:09.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:13:09 compute-0 nova_compute[254061]: 2026-01-20 19:13:09.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:13:09 compute-0 nova_compute[254061]: 2026-01-20 19:13:09.146 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:13:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:13:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:09.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:09] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Jan 20 19:13:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:09] "GET /metrics HTTP/1.1" 200 48470 "" "Prometheus/2.51.0"
Jan 20 19:13:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:10.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:10 compute-0 nova_compute[254061]: 2026-01-20 19:13:10.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:10 compute-0 ceph-mon[74381]: pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:13:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:13:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/953866614' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2304259907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:11 compute-0 nova_compute[254061]: 2026-01-20 19:13:11.051 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:13:11 compute-0 nova_compute[254061]: 2026-01-20 19:13:11.534 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2518282582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2959037362' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:11 compute-0 ceph-mon[74381]: pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:13:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:11.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:12.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:13 compute-0 nova_compute[254061]: 2026-01-20 19:13:13.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 20 19:13:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:13.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:13:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:14.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:13:14 compute-0 nova_compute[254061]: 2026-01-20 19:13:14.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:13:14 compute-0 ceph-mon[74381]: pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 20 19:13:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:13:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:15.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:16 compute-0 nova_compute[254061]: 2026-01-20 19:13:16.052 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:16.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:16 compute-0 ceph-mon[74381]: pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:13:16 compute-0 nova_compute[254061]: 2026-01-20 19:13:16.584 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:17.196Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v939: 337 pgs: 337 active+clean; 66 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 943 KiB/s wr, 3 op/s
Jan 20 19:13:17 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/299004051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:17.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:18 compute-0 sudo[269668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:13:18 compute-0 sudo[269668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:18 compute-0 sudo[269668]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:18.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:18 compute-0 ceph-mon[74381]: pgmap v939: 337 pgs: 337 active+clean; 66 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 943 KiB/s wr, 3 op/s
Jan 20 19:13:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:18.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:13:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:18.883Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:13:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:18.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:13:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v940: 337 pgs: 337 active+clean; 66 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 943 KiB/s wr, 2 op/s
Jan 20 19:13:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:19.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:13:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:13:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:20.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:20 compute-0 ceph-mon[74381]: pgmap v940: 337 pgs: 337 active+clean; 66 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 943 KiB/s wr, 2 op/s
Jan 20 19:13:21 compute-0 nova_compute[254061]: 2026-01-20 19:13:21.054 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v941: 337 pgs: 337 active+clean; 88 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 20 19:13:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/450702855' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:13:21 compute-0 nova_compute[254061]: 2026-01-20 19:13:21.588 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:13:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:21.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:13:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:22.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:22 compute-0 ceph-mon[74381]: pgmap v941: 337 pgs: 337 active+clean; 88 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 26 op/s
Jan 20 19:13:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1855881886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:13:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v942: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:13:23 compute-0 ceph-mon[74381]: pgmap v942: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:13:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:23.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:24 compute-0 podman[269700]: 2026-01-20 19:13:24.083671667 +0000 UTC m=+0.051245029 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 19:13:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:24.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v943: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:13:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:25.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:26 compute-0 nova_compute[254061]: 2026-01-20 19:13:26.055 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:26 compute-0 ceph-mon[74381]: pgmap v943: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:13:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:26.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:26 compute-0 nova_compute[254061]: 2026-01-20 19:13:26.591 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:26 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 19:13:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:27.197Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:13:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:27.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v944: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:13:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:27.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:28.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:28 compute-0 ceph-mon[74381]: pgmap v944: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:13:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:28.884Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v945: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 884 KiB/s wr, 99 op/s
Jan 20 19:13:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:29.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:29] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:13:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:29] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:13:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:30.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:30.289 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:30.289 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:30.290 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:30 compute-0 ceph-mon[74381]: pgmap v945: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 884 KiB/s wr, 99 op/s
Jan 20 19:13:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1369205945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:31 compute-0 nova_compute[254061]: 2026-01-20 19:13:31.058 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v946: 337 pgs: 337 active+clean; 66 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 885 KiB/s wr, 112 op/s
Jan 20 19:13:31 compute-0 nova_compute[254061]: 2026-01-20 19:13:31.594 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:31.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:32.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:32 compute-0 ceph-mon[74381]: pgmap v946: 337 pgs: 337 active+clean; 66 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 885 KiB/s wr, 112 op/s
Jan 20 19:13:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 19:13:33 compute-0 ceph-mon[74381]: pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 19:13:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:13:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:33.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:13:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:34.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:34 compute-0 podman[269731]: 2026-01-20 19:13:34.127607162 +0000 UTC m=+0.101767145 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 19:13:34 compute-0 ovn_controller[155128]: 2026-01-20T19:13:34Z|00068|memory_trim|INFO|Detected inactivity (last active 30000 ms ago): trimming memory
Jan 20 19:13:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 19:13:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:13:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:35.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:13:36 compute-0 nova_compute[254061]: 2026-01-20 19:13:36.060 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:36 compute-0 ceph-mon[74381]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 19:13:36 compute-0 nova_compute[254061]: 2026-01-20 19:13:36.596 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:37.199Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.261 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.262 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.279 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.346 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.346 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.352 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.352 254065 INFO nova.compute.claims [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Claim successful on node compute-0.ctlplane.example.com
Jan 20 19:13:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.437 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:37.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:13:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968990069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.882 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.889 254065 DEBUG nova.compute.provider_tree [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.910 254065 DEBUG nova.scheduler.client.report [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.946 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.599s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:37 compute-0 nova_compute[254061]: 2026-01-20 19:13:37.946 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.035 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.035 254065 DEBUG nova.network.neutron [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.071 254065 INFO nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.103 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 19:13:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:38.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:38 compute-0 sudo[269783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:13:38 compute-0 sudo[269783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:38 compute-0 sudo[269783]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.214 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.216 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.217 254065 INFO nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Creating image(s)
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.250 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.305 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.339 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.343 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.417 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.418 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.419 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.420 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.449 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.453 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:38 compute-0 ceph-mon[74381]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 19:13:38 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2968990069' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.764 254065 DEBUG nova.policy [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.768 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:38 compute-0 nova_compute[254061]: 2026-01-20 19:13:38.880 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] resizing rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 19:13:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:38.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.066 254065 DEBUG nova.objects.instance [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'migration_context' on Instance uuid 11e4950d-c220-48d6-93ff-810afbe8ffb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.085 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.085 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Ensure instance console log exists: /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.085 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.086 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.086 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 19:13:39 compute-0 ceph-mon[74381]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Jan 20 19:13:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:39.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:39] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:13:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:39] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.964 254065 DEBUG nova.network.neutron [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Successfully updated port: 6e554c79-0c8f-4254-b1f4-f67729dacdfa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.983 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-11e4950d-c220-48d6-93ff-810afbe8ffb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.984 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-11e4950d-c220-48d6-93ff-810afbe8ffb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:13:39 compute-0 nova_compute[254061]: 2026-01-20 19:13:39.984 254065 DEBUG nova.network.neutron [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:13:40 compute-0 nova_compute[254061]: 2026-01-20 19:13:40.104 254065 DEBUG nova.compute.manager [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received event network-changed-6e554c79-0c8f-4254-b1f4-f67729dacdfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:13:40 compute-0 nova_compute[254061]: 2026-01-20 19:13:40.105 254065 DEBUG nova.compute.manager [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Refreshing instance network info cache due to event network-changed-6e554c79-0c8f-4254-b1f4-f67729dacdfa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:13:40 compute-0 nova_compute[254061]: 2026-01-20 19:13:40.105 254065 DEBUG oslo_concurrency.lockutils [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-11e4950d-c220-48d6-93ff-810afbe8ffb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:13:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:40 compute-0 nova_compute[254061]: 2026-01-20 19:13:40.206 254065 DEBUG nova.network.neutron [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 19:13:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.061 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.082 254065 DEBUG nova.network.neutron [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Updating instance_info_cache with network_info: [{"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.110 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-11e4950d-c220-48d6-93ff-810afbe8ffb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.111 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Instance network_info: |[{"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.111 254065 DEBUG oslo_concurrency.lockutils [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-11e4950d-c220-48d6-93ff-810afbe8ffb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.111 254065 DEBUG nova.network.neutron [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Refreshing network info cache for port 6e554c79-0c8f-4254-b1f4-f67729dacdfa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.115 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Start _get_guest_xml network_info=[{"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'bc57af0c-4b71-499e-9808-3c8fc070a488'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.119 254065 WARNING nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.141 254065 DEBUG nova.virt.libvirt.host [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.142 254065 DEBUG nova.virt.libvirt.host [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.148 254065 DEBUG nova.virt.libvirt.host [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.148 254065 DEBUG nova.virt.libvirt.host [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.149 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.149 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T19:05:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7446c314-5a17-42fd-97d9-a7a94e27bff9',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.150 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.150 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.150 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.150 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.151 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.151 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.151 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.152 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.152 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.152 254065 DEBUG nova.virt.hardware [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.155 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v951: 337 pgs: 337 active+clean; 66 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 897 KiB/s wr, 39 op/s
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.599 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:13:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775066237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.637 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:41 compute-0 ceph-mon[74381]: pgmap v951: 337 pgs: 337 active+clean; 66 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 897 KiB/s wr, 39 op/s
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.666 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:41 compute-0 nova_compute[254061]: 2026-01-20 19:13:41.670 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:41.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:13:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1095388836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.107 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.109 254065 DEBUG nova.virt.libvirt.vif [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:13:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-780835278',display_name='tempest-TestNetworkBasicOps-server-780835278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-780835278',id=9,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuPfuHi+uE4V1ZIBY5O38PrAxKXVFlykKWG5j8Bge69/bGRf9gbezh34i4qhpnjTMhUHP26JByA8RiLJVtG/moa55IqtE4heVGMDxRueeY6mEizwAAgEJSC0YEQfIW1yg==',key_name='tempest-TestNetworkBasicOps-1443982051',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-f5b3nx4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:13:38Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=11e4950d-c220-48d6-93ff-810afbe8ffb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:13:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.110 254065 DEBUG nova.network.os_vif_util [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:13:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:42.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.111 254065 DEBUG nova.network.os_vif_util [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.113 254065 DEBUG nova.objects.instance [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11e4950d-c220-48d6-93ff-810afbe8ffb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.134 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] End _get_guest_xml xml=<domain type="kvm">
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <uuid>11e4950d-c220-48d6-93ff-810afbe8ffb3</uuid>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <name>instance-00000009</name>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <memory>131072</memory>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <vcpu>1</vcpu>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:name>tempest-TestNetworkBasicOps-server-780835278</nova:name>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:creationTime>2026-01-20 19:13:41</nova:creationTime>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:flavor name="m1.nano">
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:memory>128</nova:memory>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:disk>1</nova:disk>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:swap>0</nova:swap>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:vcpus>1</nova:vcpus>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </nova:flavor>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:owner>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </nova:owner>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <nova:ports>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <nova:port uuid="6e554c79-0c8f-4254-b1f4-f67729dacdfa">
Jan 20 19:13:42 compute-0 nova_compute[254061]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         </nova:port>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </nova:ports>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </nova:instance>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <sysinfo type="smbios">
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <system>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <entry name="manufacturer">RDO</entry>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <entry name="product">OpenStack Compute</entry>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <entry name="serial">11e4950d-c220-48d6-93ff-810afbe8ffb3</entry>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <entry name="uuid">11e4950d-c220-48d6-93ff-810afbe8ffb3</entry>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <entry name="family">Virtual Machine</entry>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </system>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <os>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <boot dev="hd"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <smbios mode="sysinfo"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </os>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <features>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <vmcoreinfo/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </features>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <clock offset="utc">
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <timer name="hpet" present="no"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <cpu mode="host-model" match="exact">
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <disk type="network" device="disk">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/11e4950d-c220-48d6-93ff-810afbe8ffb3_disk">
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </source>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <target dev="vda" bus="virtio"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <disk type="network" device="cdrom">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config">
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </source>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:13:42 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <target dev="sda" bus="sata"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <interface type="ethernet">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <mac address="fa:16:3e:dd:36:69"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <mtu size="1442"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <target dev="tap6e554c79-0c"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <serial type="pty">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <log file="/var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/console.log" append="off"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <video>
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </video>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <input type="tablet" bus="usb"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <rng model="virtio">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <backend model="random">/dev/urandom</backend>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <controller type="usb" index="0"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     <memballoon model="virtio">
Jan 20 19:13:42 compute-0 nova_compute[254061]:       <stats period="10"/>
Jan 20 19:13:42 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:13:42 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:13:42 compute-0 nova_compute[254061]: </domain>
Jan 20 19:13:42 compute-0 nova_compute[254061]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.135 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Preparing to wait for external event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.136 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.136 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.136 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.137 254065 DEBUG nova.virt.libvirt.vif [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:13:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-780835278',display_name='tempest-TestNetworkBasicOps-server-780835278',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-780835278',id=9,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuPfuHi+uE4V1ZIBY5O38PrAxKXVFlykKWG5j8Bge69/bGRf9gbezh34i4qhpnjTMhUHP26JByA8RiLJVtG/moa55IqtE4heVGMDxRueeY6mEizwAAgEJSC0YEQfIW1yg==',key_name='tempest-TestNetworkBasicOps-1443982051',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-f5b3nx4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:13:38Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=11e4950d-c220-48d6-93ff-810afbe8ffb3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.137 254065 DEBUG nova.network.os_vif_util [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.138 254065 DEBUG nova.network.os_vif_util [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.138 254065 DEBUG os_vif [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.142 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.143 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.143 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.145 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.146 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e554c79-0c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.146 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e554c79-0c, col_values=(('external_ids', {'iface-id': '6e554c79-0c8f-4254-b1f4-f67729dacdfa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dd:36:69', 'vm-uuid': '11e4950d-c220-48d6-93ff-810afbe8ffb3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.148 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:42 compute-0 NetworkManager[48914]: <info>  [1768936422.1487] manager: (tap6e554c79-0c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.150 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.154 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.156 254065 INFO os_vif [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c')
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.202 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.202 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.202 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:dd:36:69, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.203 254065 INFO nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Using config drive
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.233 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.320 254065 DEBUG nova.network.neutron [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Updated VIF entry in instance network info cache for port 6e554c79-0c8f-4254-b1f4-f67729dacdfa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.320 254065 DEBUG nova.network.neutron [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Updating instance_info_cache with network_info: [{"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.342 254065 DEBUG oslo_concurrency.lockutils [req-60927fde-7c04-4a00-bf55-5278827d9440 req-05498b05-20d1-4b1e-abac-c796b47c8b03 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-11e4950d-c220-48d6-93ff-810afbe8ffb3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
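The instance_info_cache payload in the update above is plain JSON. As a minimal sketch (the capture file path is hypothetical; the structure is exactly what update_instance_cache_with_nw_info logged), the fixed and floating addresses can be pulled out like this:

    import json

    # network_info as logged above, saved to a file for inspection
    with open("network_info.json") as f:   # hypothetical capture file
        vifs = json.load(f)

    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # For this port: 6e554c79-0c8f-4254-b1f4-f67729dacdfa 10.100.0.12 -> ['192.168.122.202']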
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.541 254065 INFO nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Creating config drive at /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.545 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwvhlevu2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.671 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwvhlevu2" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
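The two processutils entries above are the whole config-drive build: oslo.concurrency shells out to mkisofs and logs the exit code and duration (0 in 0.126s). Reduced to a standalone sketch, with the staging directory a placeholder for the metadata tree nova renders into a tempdir (/tmp/tmpwvhlevu2 above):

    import subprocess

    # Build an ISO9660 config drive the same way the logged command does.
    subprocess.run(
        [
            "/usr/bin/mkisofs",
            "-o", "/var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config",
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
            "-quiet", "-J", "-r", "-V", "config-2",
            "/tmp/metadata-tree",  # placeholder for the rendered metadata dir
        ],
        check=True,
    )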
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.701 254065 DEBUG nova.storage.rbd_utils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:13:42 compute-0 nova_compute[254061]: 2026-01-20 19:13:42.704 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2775066237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:13:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1095388836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.042 254065 DEBUG oslo_concurrency.processutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config 11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.043 254065 INFO nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Deleting local config drive /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3/disk.config because it was imported into RBD.
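The rbd_utils probe at line 80 works by attempting to open the image and catching the not-found error; once the import succeeds, the local ISO is discarded as logged above. A sketch of the same probe with the Ceph Python bindings, using the cluster parameters from the logged command (an approximation of nova's helper, not its code):

    import rados
    import rbd

    # Opening a nonexistent image raises rbd.ImageNotFound, which nova logs
    # as "rbd image ... does not exist" before falling back to 'rbd import'.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    ioctx = cluster.open_ioctx("vms")
    try:
        with rbd.Image(ioctx, "11e4950d-c220-48d6-93ff-810afbe8ffb3_disk.config") as img:
            print("exists, size:", img.size())
    except rbd.ImageNotFound:
        print("does not exist")
    finally:
        ioctx.close()
        cluster.shutdown()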
Jan 20 19:13:43 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 19:13:43 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 19:13:43 compute-0 kernel: tap6e554c79-0c: entered promiscuous mode
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.1370] manager: (tap6e554c79-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.138 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_controller[155128]: 2026-01-20T19:13:43Z|00069|binding|INFO|Claiming lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa for this chassis.
Jan 20 19:13:43 compute-0 ovn_controller[155128]: 2026-01-20T19:13:43Z|00070|binding|INFO|6e554c79-0c8f-4254-b1f4-f67729dacdfa: Claiming fa:16:3e:dd:36:69 10.100.0.12
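ovn-controller claims the lport once the tap device shows up in br-int with a matching iface-id. The resulting binding is visible in the Southbound DB; a sketch, assuming ovn-sbctl on this host can reach the SB database:

    import subprocess

    # Show which chassis holds the Port_Binding row for the lport claimed above.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=6e554c79-0c8f-4254-b1f4-f67729dacdfa"],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout)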
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.152 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.1542] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.1552] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.165 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:36:69 10.100.0.12'], port_security=['fa:16:3e:dd:36:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '11e4950d-c220-48d6-93ff-810afbe8ffb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '7', 'neutron:security_group_ids': '9669d00f-1ed1-4975-b80c-aea64099b405', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fd85ad8-198e-402e-ab3e-432be0848b78, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=6e554c79-0c8f-4254-b1f4-f67729dacdfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.166 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 6e554c79-0c8f-4254-b1f4-f67729dacdfa in datapath 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc bound to our chassis
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.167 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.179 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9cdc4a-3e68-44df-a84f-eccb1b87c1d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.179 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6d1a28e4-51 in ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
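provision_datapath builds the metadata plumbing as a veth pair: one end stays in the root namespace and gets plugged into br-int, the peer lands in the ovnmeta- namespace. Stripped of the privsep indirection visible in the surrounding replies, the pyroute2 calls amount to roughly this (a sketch, not neutron's exact code):

    from pyroute2 import IPRoute, netns

    NS = "ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc"
    netns.create(NS)                      # namespace named in the log above
    ipr = IPRoute()
    # veth pair: tap...-50 stays outside, tap...-51 moves into the namespace
    ipr.link("add", ifname="tap6d1a28e4-50", kind="veth", peer="tap6d1a28e4-51")
    peer = ipr.link_lookup(ifname="tap6d1a28e4-51")[0]
    ipr.link("set", index=peer, net_ns_fd=NS)
    ipr.link("set", index=ipr.link_lookup(ifname="tap6d1a28e4-50")[0], state="up")
    ipr.close()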
Jan 20 19:13:43 compute-0 systemd-machined[220746]: New machine qemu-4-instance-00000009.
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.181 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6d1a28e4-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.181 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b166cb-49b3-48f6-91ed-c0c38c629b94]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.181 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c9614e-952c-4381-9c16-f7ff50ee623e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 systemd-udevd[270133]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.193 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[305f5bf8-be0e-43ba-a550-b72302b33668]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.2062] device (tap6e554c79-0c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.2067] device (tap6e554c79-0c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:13:43 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000009.
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.223 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[eff78d3b-c5a7-432d-a8c5-b553fd2a9568]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.258 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[0f237fc0-3924-4416-bfe9-8692d2f9905a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.2673] manager: (tap6d1a28e4-50): new Veth device (/org/freedesktop/NetworkManager/Devices/51)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.265 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.264 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[34d75d1e-bc24-499b-afa4-65e8d20232ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.278 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_controller[155128]: 2026-01-20T19:13:43Z|00071|binding|INFO|Setting lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa ovn-installed in OVS
Jan 20 19:13:43 compute-0 ovn_controller[155128]: 2026-01-20T19:13:43Z|00072|binding|INFO|Setting lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa up in Southbound
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.290 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.304 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f417a6-3b31-413f-86d8-19e9714d3cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.307 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2844b7-ea90-4245-b6d7-314a7c62d143]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.3284] device (tap6d1a28e4-50): carrier: link connected
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.336 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[7f35ffcd-2910-466f-bbcb-ceabdb34ce0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.352 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[d5063ce3-5009-497f-8001-f5f8ab4e88b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a28e4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461911, 'reachable_time': 27048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270165, 'error': None, 'target': 'ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.368 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[1fe37c19-0f44-42a1-8893-9a1bf20a7c08]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feaf:f67e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 461911, 'tstamp': 461911}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270166, 'error': None, 'target': 'ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.383 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5299a6-8dd6-41dc-beb9-08c3df9209f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6d1a28e4-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:af:f6:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461911, 'reachable_time': 27048, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270167, 'error': None, 'target': 'ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v952: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.411 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[14e22f4e-a469-4b87-8c48-531abffe5ed9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.461 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[d921d48e-cf2d-4513-99cf-0fdba9b0cf40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.463 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a28e4-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.463 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.464 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d1a28e4-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:43 compute-0 NetworkManager[48914]: <info>  [1768936423.4661] manager: (tap6d1a28e4-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.465 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 kernel: tap6d1a28e4-50: entered promiscuous mode
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.467 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.468 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6d1a28e4-50, col_values=(('external_ids', {'iface-id': '4bebb0b0-3708-46d2-912f-ff2c202dc1c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.468 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_controller[155128]: 2026-01-20T19:13:43Z|00073|binding|INFO|Releasing lport 4bebb0b0-3708-46d2-912f-ff2c202dc1c5 from this chassis (sb_readonly=0)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.482 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.483 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6d1a28e4-5186-4bb9-946f-fd06e39cf5fc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6d1a28e4-5186-4bb9-946f-fd06e39cf5fc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.484 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[4055deea-a44d-4d15-b56c-d37a4b3e3846]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.485 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/6d1a28e4-5186-4bb9-946f-fd06e39cf5fc.pid.haproxy
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:13:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:43.486 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'env', 'PROCESS_TAG=haproxy-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6d1a28e4-5186-4bb9-946f-fd06e39cf5fc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
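Through rootwrap, the command above boils down to running haproxy inside the metadata namespace with the config just rendered; it daemonizes and binds 169.254.169.254:80 per the listener section. A sketch of the equivalent direct call (still needs root to enter the namespace; in this deployment the wrapper actually lands in a podman container, as the entries below show):

    import subprocess

    subprocess.run(
        ["ip", "netns", "exec", "ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc",
         "haproxy", "-f",
         "/var/lib/neutron/ovn-metadata-proxy/6d1a28e4-5186-4bb9-946f-fd06e39cf5fc.conf"],
        check=True,
    )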
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.562 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936423.5613434, 11e4950d-c220-48d6-93ff-810afbe8ffb3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.562 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] VM Started (Lifecycle Event)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.587 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.591 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936423.5645506, 11e4950d-c220-48d6-93ff-810afbe8ffb3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.591 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] VM Paused (Lifecycle Event)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.601 254065 DEBUG nova.compute.manager [req-6b725c2c-4560-42b8-9273-8580c7462600 req-6e729b2e-8462-48c9-aab1-f66b37147542 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.602 254065 DEBUG oslo_concurrency.lockutils [req-6b725c2c-4560-42b8-9273-8580c7462600 req-6e729b2e-8462-48c9-aab1-f66b37147542 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.602 254065 DEBUG oslo_concurrency.lockutils [req-6b725c2c-4560-42b8-9273-8580c7462600 req-6e729b2e-8462-48c9-aab1-f66b37147542 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.602 254065 DEBUG oslo_concurrency.lockutils [req-6b725c2c-4560-42b8-9273-8580c7462600 req-6e729b2e-8462-48c9-aab1-f66b37147542 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.602 254065 DEBUG nova.compute.manager [req-6b725c2c-4560-42b8-9273-8580c7462600 req-6e729b2e-8462-48c9-aab1-f66b37147542 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Processing event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.603 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.606 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.609 254065 INFO nova.virt.libvirt.driver [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Instance spawned successfully.
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.609 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.612 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.614 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936423.606643, 11e4950d-c220-48d6-93ff-810afbe8ffb3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.614 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] VM Resumed (Lifecycle Event)
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.633 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.633 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.633 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.634 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.634 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.634 254065 DEBUG nova.virt.libvirt.driver [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.638 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.640 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.681 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] During sync_power_state the instance has a pending task (spawning). Skip.
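The "Skip" above is nova's power-state sync guard: a lifecycle event that races an in-flight task must not overwrite the DB. The decision in the two preceding entries reduces to roughly this (a paraphrase of the logged logic, not nova's code):

    # power_state values as in the log: 0 = NOSTATE (DB), 1 = RUNNING (VM)
    def should_sync(task_state, db_power_state, vm_power_state):
        if task_state is not None:     # pending task, e.g. 'spawning'
            return False               # logged as "Skip."
        return db_power_state != vm_power_state

    print(should_sync("spawning", 0, 1))   # False -> sync skipped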
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.716 254065 INFO nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Took 5.50 seconds to spawn the instance on the hypervisor.
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.717 254065 DEBUG nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:13:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:43.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.789 254065 INFO nova.compute.manager [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Took 6.47 seconds to build instance.
Jan 20 19:13:43 compute-0 nova_compute[254061]: 2026-01-20 19:13:43.806 254065 DEBUG oslo_concurrency.lockutils [None req-a25a96d2-ab5a-4a50-b646-8c653141e272 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:43 compute-0 ceph-mon[74381]: pgmap v952: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 20 19:13:43 compute-0 podman[270239]: 2026-01-20 19:13:43.889027749 +0000 UTC m=+0.068501214 container create 2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:13:43 compute-0 systemd[1]: Started libpod-conmon-2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3.scope.
Jan 20 19:13:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cc9d3560a571b7c9abb6192b8f533b2c5158e4f63588dec000e67ed13d99998/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:43 compute-0 podman[270239]: 2026-01-20 19:13:43.863119479 +0000 UTC m=+0.042592964 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:13:43 compute-0 podman[270239]: 2026-01-20 19:13:43.957322787 +0000 UTC m=+0.136796252 container init 2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:13:43 compute-0 podman[270239]: 2026-01-20 19:13:43.963821963 +0000 UTC m=+0.143295428 container start 2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:13:43 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [NOTICE]   (270258) : New worker (270260) forked
Jan 20 19:13:43 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [NOTICE]   (270258) : Loading success.
Jan 20 19:13:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
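The anonymous "HEAD /" requests that radosgw keeps logging are load-balancer health probes from the other nodes. They can be reproduced with a plain HTTP HEAD; the RGW listen port does not appear in the log, so it is assumed here:

    import http.client

    conn = http.client.HTTPConnection("compute-0", 8080, timeout=5)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # 200, matching the beast access lines
    conn.close()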
Jan 20 19:13:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v953: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.700 254065 DEBUG nova.compute.manager [req-9611db8d-e18a-402d-87b0-7ec60c5f152a req-4bae3a84-6b33-4573-99b6-3bf0194d9d9a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.700 254065 DEBUG oslo_concurrency.lockutils [req-9611db8d-e18a-402d-87b0-7ec60c5f152a req-4bae3a84-6b33-4573-99b6-3bf0194d9d9a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.700 254065 DEBUG oslo_concurrency.lockutils [req-9611db8d-e18a-402d-87b0-7ec60c5f152a req-4bae3a84-6b33-4573-99b6-3bf0194d9d9a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.701 254065 DEBUG oslo_concurrency.lockutils [req-9611db8d-e18a-402d-87b0-7ec60c5f152a req-4bae3a84-6b33-4573-99b6-3bf0194d9d9a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.701 254065 DEBUG nova.compute.manager [req-9611db8d-e18a-402d-87b0-7ec60c5f152a req-4bae3a84-6b33-4573-99b6-3bf0194d9d9a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] No waiting events found dispatching network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.701 254065 WARNING nova.compute.manager [req-9611db8d-e18a-402d-87b0-7ec60c5f152a req-4bae3a84-6b33-4573-99b6-3bf0194d9d9a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received unexpected event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa for instance with vm_state active and task_state None.
Jan 20 19:13:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:45.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.806 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.807 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.807 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.807 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.807 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.808 254065 INFO nova.compute.manager [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Terminating instance
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.809 254065 DEBUG nova.compute.manager [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 19:13:45 compute-0 kernel: tap6e554c79-0c (unregistering): left promiscuous mode
Jan 20 19:13:45 compute-0 NetworkManager[48914]: <info>  [1768936425.8532] device (tap6e554c79-0c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:13:45 compute-0 ovn_controller[155128]: 2026-01-20T19:13:45Z|00074|binding|INFO|Releasing lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa from this chassis (sb_readonly=0)
Jan 20 19:13:45 compute-0 ovn_controller[155128]: 2026-01-20T19:13:45Z|00075|binding|INFO|Setting lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa down in Southbound
Jan 20 19:13:45 compute-0 ovn_controller[155128]: 2026-01-20T19:13:45Z|00076|binding|INFO|Removing iface tap6e554c79-0c ovn-installed in OVS
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.862 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:45.871 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:36:69 10.100.0.12'], port_security=['fa:16:3e:dd:36:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '11e4950d-c220-48d6-93ff-810afbe8ffb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9669d00f-1ed1-4975-b80c-aea64099b405', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fd85ad8-198e-402e-ab3e-432be0848b78, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=6e554c79-0c8f-4254-b1f4-f67729dacdfa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:13:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:45.873 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 6e554c79-0c8f-4254-b1f4-f67729dacdfa in datapath 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc unbound from our chassis
Jan 20 19:13:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:45.874 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:13:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:45.875 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c831d4-26f8-4a9a-b4b0-3f360da33850]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:45 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:45.876 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc namespace which is not needed anymore
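Teardown mirrors provisioning: once the last VIF on the network unbinds from the chassis, the agent deletes the ovnmeta namespace, and the haproxy serving it is stopped (visible in the container exit below). A sketch of the namespace removal, with pyroute2 as for creation:

    from pyroute2 import netns

    NS = "ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc"
    if NS in netns.listnetns():
        netns.remove(NS)   # the veth peer inside the namespace goes with it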
Jan 20 19:13:45 compute-0 nova_compute[254061]: 2026-01-20 19:13:45.881 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 20 19:13:45 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000009.scope: Consumed 2.710s CPU time.
Jan 20 19:13:45 compute-0 systemd-machined[220746]: Machine qemu-4-instance-00000009 terminated.
Jan 20 19:13:45 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [NOTICE]   (270258) : haproxy version is 2.8.14-c23fe91
Jan 20 19:13:45 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [NOTICE]   (270258) : path to executable is /usr/sbin/haproxy
Jan 20 19:13:45 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [WARNING]  (270258) : Exiting Master process...
Jan 20 19:13:45 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [WARNING]  (270258) : Exiting Master process...
Jan 20 19:13:45 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [ALERT]    (270258) : Current worker (270260) exited with code 143 (Terminated)
Jan 20 19:13:45 compute-0 neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc[270254]: [WARNING]  (270258) : All workers exited. Exiting... (0)
Jan 20 19:13:46 compute-0 systemd[1]: libpod-2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3.scope: Deactivated successfully.
Jan 20 19:13:46 compute-0 podman[270292]: 2026-01-20 19:13:46.003584586 +0000 UTC m=+0.040614401 container died 2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:13:46 compute-0 kernel: tap6e554c79-0c: entered promiscuous mode
Jan 20 19:13:46 compute-0 NetworkManager[48914]: <info>  [1768936426.0253] manager: (tap6e554c79-0c): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 20 19:13:46 compute-0 kernel: tap6e554c79-0c (unregistering): left promiscuous mode
Jan 20 19:13:46 compute-0 ovn_controller[155128]: 2026-01-20T19:13:46Z|00077|binding|INFO|Claiming lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa for this chassis.
Jan 20 19:13:46 compute-0 ovn_controller[155128]: 2026-01-20T19:13:46Z|00078|binding|INFO|6e554c79-0c8f-4254-b1f4-f67729dacdfa: Claiming fa:16:3e:dd:36:69 10.100.0.12
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.030 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3-userdata-shm.mount: Deactivated successfully.
Jan 20 19:13:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cc9d3560a571b7c9abb6192b8f533b2c5158e4f63588dec000e67ed13d99998-merged.mount: Deactivated successfully.
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.045 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:36:69 10.100.0.12'], port_security=['fa:16:3e:dd:36:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '11e4950d-c220-48d6-93ff-810afbe8ffb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9669d00f-1ed1-4975-b80c-aea64099b405', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fd85ad8-198e-402e-ab3e-432be0848b78, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=6e554c79-0c8f-4254-b1f4-f67729dacdfa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.048 254065 INFO nova.virt.libvirt.driver [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Instance destroyed successfully.
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.048 254065 DEBUG nova.objects.instance [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'resources' on Instance uuid 11e4950d-c220-48d6-93ff-810afbe8ffb3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:13:46 compute-0 ovn_controller[155128]: 2026-01-20T19:13:46Z|00079|binding|INFO|Releasing lport 6e554c79-0c8f-4254-b1f4-f67729dacdfa from this chassis (sb_readonly=0)
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.053 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 podman[270292]: 2026-01-20 19:13:46.055992603 +0000 UTC m=+0.093022408 container cleanup 2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.060 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dd:36:69 10.100.0.12'], port_security=['fa:16:3e:dd:36:69 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '11e4950d-c220-48d6-93ff-810afbe8ffb3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-708195267', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '9', 'neutron:security_group_ids': '9669d00f-1ed1-4975-b80c-aea64099b405', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.202', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9fd85ad8-198e-402e-ab3e-432be0848b78, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=6e554c79-0c8f-4254-b1f4-f67729dacdfa) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.062 254065 DEBUG nova.virt.libvirt.vif [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:13:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-780835278',display_name='tempest-TestNetworkBasicOps-server-780835278',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-780835278',id=9,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKuPfuHi+uE4V1ZIBY5O38PrAxKXVFlykKWG5j8Bge69/bGRf9gbezh34i4qhpnjTMhUHP26JByA8RiLJVtG/moa55IqtE4heVGMDxRueeY6mEizwAAgEJSC0YEQfIW1yg==',key_name='tempest-TestNetworkBasicOps-1443982051',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:13:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-f5b3nx4t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:13:43Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=11e4950d-c220-48d6-93ff-810afbe8ffb3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.062 254065 DEBUG nova.network.os_vif_util [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "address": "fa:16:3e:dd:36:69", "network": {"id": "6d1a28e4-5186-4bb9-946f-fd06e39cf5fc", "bridge": "br-int", "label": "tempest-network-smoke--146205527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e554c79-0c", "ovs_interfaceid": "6e554c79-0c8f-4254-b1f4-f67729dacdfa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.064 254065 DEBUG nova.network.os_vif_util [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.064 254065 DEBUG os_vif [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.066 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.067 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e554c79-0c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.068 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 systemd[1]: libpod-conmon-2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3.scope: Deactivated successfully.
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.069 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.071 254065 INFO os_vif [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dd:36:69,bridge_name='br-int',has_traffic_filtering=True,id=6e554c79-0c8f-4254-b1f4-f67729dacdfa,network=Network(6d1a28e4-5186-4bb9-946f-fd06e39cf5fc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6e554c79-0c')
Jan 20 19:13:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:46.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:46 compute-0 podman[270325]: 2026-01-20 19:13:46.121061243 +0000 UTC m=+0.042273565 container remove 2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.126 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[de5889fd-c668-4f80-b8f9-09d78a3fac07]: (4, ('Tue Jan 20 07:13:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc (2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3)\n2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3\nTue Jan 20 07:13:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc (2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3)\n2b1f46c03bb79f649643d279865ba8acfa2a20d5f93ca955a619682af2a933d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.128 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a99c602d-331a-427f-ae7f-e6fefe0e6b5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.129 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d1a28e4-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.131 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 kernel: tap6d1a28e4-50: left promiscuous mode
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.144 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.147 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[37112371-d5d3-4b34-bb60-7c483c53a08a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.159 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[7b911498-c916-4b83-86ef-c71b9d8f15a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.160 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[57a7304b-2693-4a74-954f-f30ce2be4128]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.174 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a6d25071-8a8b-4992-9080-260c7b330f94]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 461903, 'reachable_time': 16956, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270357, 'error': None, 'target': 'ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d6d1a28e4\x2d5186\x2d4bb9\x2d946f\x2dfd06e39cf5fc.mount: Deactivated successfully.
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.178 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6d1a28e4-5186-4bb9-946f-fd06e39cf5fc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.178 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd2af94-5f66-4cf1-8c4f-9019c279e223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.178 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 6e554c79-0c8f-4254-b1f4-f67729dacdfa in datapath 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc unbound from our chassis
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.179 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.180 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e63d7b1b-cf42-45b4-9683-e88cb2770d2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.180 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 6e554c79-0c8f-4254-b1f4-f67729dacdfa in datapath 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc unbound from our chassis
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.181 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6d1a28e4-5186-4bb9-946f-fd06e39cf5fc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:13:46 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:46.182 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ffdc85-cd84-4e4b-8431-087c4e3570ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:46 compute-0 ceph-mon[74381]: pgmap v953: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.491 254065 INFO nova.virt.libvirt.driver [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Deleting instance files /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3_del
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.492 254065 INFO nova.virt.libvirt.driver [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Deletion of /var/lib/nova/instances/11e4950d-c220-48d6-93ff-810afbe8ffb3_del complete
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.539 254065 INFO nova.compute.manager [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Took 0.73 seconds to destroy the instance on the hypervisor.
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.540 254065 DEBUG oslo.service.loopingcall [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.540 254065 DEBUG nova.compute.manager [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.541 254065 DEBUG nova.network.neutron [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 19:13:46 compute-0 nova_compute[254061]: 2026-01-20 19:13:46.648 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:47.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v954: 337 pgs: 337 active+clean; 62 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.632 254065 DEBUG nova.network.neutron [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.654 254065 INFO nova.compute.manager [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Took 1.11 seconds to deallocate network for instance.
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.692 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.692 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.741 254065 DEBUG oslo_concurrency.processutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.771 254065 DEBUG nova.compute.manager [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received event network-vif-unplugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.772 254065 DEBUG oslo_concurrency.lockutils [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.772 254065 DEBUG oslo_concurrency.lockutils [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.772 254065 DEBUG oslo_concurrency.lockutils [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.773 254065 DEBUG nova.compute.manager [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] No waiting events found dispatching network-vif-unplugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.773 254065 WARNING nova.compute.manager [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received unexpected event network-vif-unplugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa for instance with vm_state deleted and task_state None.
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.773 254065 DEBUG nova.compute.manager [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.774 254065 DEBUG oslo_concurrency.lockutils [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.774 254065 DEBUG oslo_concurrency.lockutils [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.774 254065 DEBUG oslo_concurrency.lockutils [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.774 254065 DEBUG nova.compute.manager [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] No waiting events found dispatching network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:13:47 compute-0 nova_compute[254061]: 2026-01-20 19:13:47.775 254065 WARNING nova.compute.manager [req-5d297154-f8f8-4c33-96ab-881e0da13621 req-87573e65-2865-445e-a5df-5949e39a6295 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Received unexpected event network-vif-plugged-6e554c79-0c8f-4254-b1f4-f67729dacdfa for instance with vm_state deleted and task_state None.
Jan 20 19:13:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:13:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:47.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:13:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:48.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:13:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251606928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:48 compute-0 nova_compute[254061]: 2026-01-20 19:13:48.200 254065 DEBUG oslo_concurrency.processutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:13:48 compute-0 nova_compute[254061]: 2026-01-20 19:13:48.206 254065 DEBUG nova.compute.provider_tree [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:13:48 compute-0 nova_compute[254061]: 2026-01-20 19:13:48.223 254065 DEBUG nova.scheduler.client.report [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:13:48 compute-0 nova_compute[254061]: 2026-01-20 19:13:48.241 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:48 compute-0 nova_compute[254061]: 2026-01-20 19:13:48.279 254065 INFO nova.scheduler.client.report [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Deleted allocations for instance 11e4950d-c220-48d6-93ff-810afbe8ffb3
Jan 20 19:13:48 compute-0 nova_compute[254061]: 2026-01-20 19:13:48.355 254065 DEBUG oslo_concurrency.lockutils [None req-a1e9a66b-b721-4a7a-8abc-6ab7752e5657 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "11e4950d-c220-48d6-93ff-810afbe8ffb3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.549s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:48 compute-0 ceph-mon[74381]: pgmap v954: 337 pgs: 337 active+clean; 62 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 20 19:13:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2251606928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:13:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:48.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v955: 337 pgs: 337 active+clean; 62 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 20 19:13:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1389110666' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:13:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1389110666' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:13:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:49.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:13:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:13:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:50.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:50 compute-0 ceph-mon[74381]: pgmap v955: 337 pgs: 337 active+clean; 62 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 20 19:13:51 compute-0 nova_compute[254061]: 2026-01-20 19:13:51.070 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 20 19:13:51 compute-0 nova_compute[254061]: 2026-01-20 19:13:51.650 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:51 compute-0 ceph-mon[74381]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 20 19:13:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:51.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:13:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:52.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:13:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 933 KiB/s wr, 115 op/s
Jan 20 19:13:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:53.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:54.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:54 compute-0 ceph-mon[74381]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 933 KiB/s wr, 115 op/s
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:13:55
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'default.rgw.meta', 'vms', 'default.rgw.control', '.mgr', 'cephfs.cephfs.data', 'images', 'volumes', '.nfs', '.rgw.root', 'cephfs.cephfs.meta']
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:13:55 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:55.091 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:13:55 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:55.092 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:13:55 compute-0 nova_compute[254061]: 2026-01-20 19:13:55.091 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:55 compute-0 podman[270395]: 2026-01-20 19:13:55.122157543 +0000 UTC m=+0.072050300 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 19:13:55 compute-0 nova_compute[254061]: 2026-01-20 19:13:55.256 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:55 compute-0 nova_compute[254061]: 2026-01-20 19:13:55.375 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:13:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:13:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:55.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:13:56 compute-0 ceph-mon[74381]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Jan 20 19:13:56 compute-0 nova_compute[254061]: 2026-01-20 19:13:56.111 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:56.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:56 compute-0 nova_compute[254061]: 2026-01-20 19:13:56.652 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:13:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:13:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:57.202Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:57 compute-0 sudo[270418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:57 compute-0 sudo[270418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:57 compute-0 sudo[270418]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:57 compute-0 sudo[270443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:13:57 compute-0 sudo[270443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 19:13:57 compute-0 sudo[270443]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:57.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 676 B/s wr, 5 op/s
Jan 20 19:13:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:13:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:13:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:13:58 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:13:58 compute-0 sudo[270501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:58 compute-0 sudo[270501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:58 compute-0 sudo[270501]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:13:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:13:58.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:13:58 compute-0 sudo[270527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:13:58 compute-0 sudo[270527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:58 compute-0 sudo[270552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:13:58 compute-0 sudo[270552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:58 compute-0 sudo[270552]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:58 compute-0 ceph-mon[74381]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:13:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.570921252 +0000 UTC m=+0.039503549 container create 5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mestorf, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 19:13:58 compute-0 systemd[1]: Started libpod-conmon-5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b.scope.
Jan 20 19:13:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.553628945 +0000 UTC m=+0.022211242 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.662367187 +0000 UTC m=+0.130949494 container init 5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.669418748 +0000 UTC m=+0.138001045 container start 5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mestorf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.672716707 +0000 UTC m=+0.141299014 container attach 5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mestorf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:13:58 compute-0 systemd[1]: libpod-5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b.scope: Deactivated successfully.
Jan 20 19:13:58 compute-0 intelligent_mestorf[270634]: 167 167
Jan 20 19:13:58 compute-0 conmon[270634]: conmon 5b3176f75fe213555ddf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b.scope/container/memory.events
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.676664334 +0000 UTC m=+0.145246631 container died 5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mestorf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:13:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-da2c9b43b3552b27a071651ec29dbdbfe903ae89c0755e5f33bf5a9b8c3592c7-merged.mount: Deactivated successfully.
Jan 20 19:13:58 compute-0 podman[270617]: 2026-01-20 19:13:58.708226087 +0000 UTC m=+0.176808384 container remove 5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:13:58 compute-0 systemd[1]: libpod-conmon-5b3176f75fe213555ddf25fa7aa1a7dff753d9897fba7c7cf016f605900c1b4b.scope: Deactivated successfully.
Jan 20 19:13:58 compute-0 podman[270658]: 2026-01-20 19:13:58.853331273 +0000 UTC m=+0.037607149 container create a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_austin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 19:13:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:13:58.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:13:58 compute-0 systemd[1]: Started libpod-conmon-a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57.scope.
Jan 20 19:13:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82305fd829170602c084ee1f8d4f7cbb097dc4c457937afeca880cf96da2704e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82305fd829170602c084ee1f8d4f7cbb097dc4c457937afeca880cf96da2704e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82305fd829170602c084ee1f8d4f7cbb097dc4c457937afeca880cf96da2704e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82305fd829170602c084ee1f8d4f7cbb097dc4c457937afeca880cf96da2704e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82305fd829170602c084ee1f8d4f7cbb097dc4c457937afeca880cf96da2704e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:58 compute-0 podman[270658]: 2026-01-20 19:13:58.837526996 +0000 UTC m=+0.021802892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:13:58 compute-0 podman[270658]: 2026-01-20 19:13:58.943126732 +0000 UTC m=+0.127402628 container init a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_austin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:13:58 compute-0 podman[270658]: 2026-01-20 19:13:58.948412745 +0000 UTC m=+0.132688621 container start a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_austin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:13:58 compute-0 podman[270658]: 2026-01-20 19:13:58.951772926 +0000 UTC m=+0.136048822 container attach a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_austin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:13:59 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:13:59.094 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:59 compute-0 inspiring_austin[270675]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:13:59 compute-0 inspiring_austin[270675]: --> All data devices are unavailable
Jan 20 19:13:59 compute-0 systemd[1]: libpod-a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57.scope: Deactivated successfully.
Jan 20 19:13:59 compute-0 podman[270658]: 2026-01-20 19:13:59.273674214 +0000 UTC m=+0.457950090 container died a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_austin, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:13:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-82305fd829170602c084ee1f8d4f7cbb097dc4c457937afeca880cf96da2704e-merged.mount: Deactivated successfully.
Jan 20 19:13:59 compute-0 podman[270658]: 2026-01-20 19:13:59.322928557 +0000 UTC m=+0.507204433 container remove a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_austin, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:13:59 compute-0 systemd[1]: libpod-conmon-a80e9cb71246883451ff52590a4880146eb472c3459567aa7c510adb8b1b0b57.scope: Deactivated successfully.
Jan 20 19:13:59 compute-0 sudo[270527]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:59 compute-0 sudo[270704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:59 compute-0 sudo[270704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:59 compute-0 sudo[270704]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:59 compute-0 sudo[270729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:13:59 compute-0 sudo[270729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:59 compute-0 ceph-mon[74381]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 676 B/s wr, 5 op/s
Jan 20 19:13:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:13:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:13:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:13:59.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:13:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:13:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:13:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:13:59 compute-0 podman[270794]: 2026-01-20 19:13:59.97213222 +0000 UTC m=+0.044587597 container create 9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hopper, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:13:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 676 B/s wr, 5 op/s
Jan 20 19:14:00 compute-0 systemd[1]: Started libpod-conmon-9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672.scope.
Jan 20 19:14:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:00 compute-0 podman[270794]: 2026-01-20 19:13:59.953558068 +0000 UTC m=+0.026013445 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:14:00 compute-0 podman[270794]: 2026-01-20 19:14:00.063514982 +0000 UTC m=+0.135970359 container init 9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hopper, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:00 compute-0 podman[270794]: 2026-01-20 19:14:00.071106588 +0000 UTC m=+0.143561955 container start 9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:14:00 compute-0 podman[270794]: 2026-01-20 19:14:00.074572891 +0000 UTC m=+0.147028308 container attach 9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:14:00 compute-0 flamboyant_hopper[270810]: 167 167
Jan 20 19:14:00 compute-0 systemd[1]: libpod-9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672.scope: Deactivated successfully.
Jan 20 19:14:00 compute-0 podman[270794]: 2026-01-20 19:14:00.075862286 +0000 UTC m=+0.148317643 container died 9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-30ecda4381ad6ddea4bb3598e073563112271b272a599c266d320c0f220bb94e-merged.mount: Deactivated successfully.
Jan 20 19:14:00 compute-0 podman[270794]: 2026-01-20 19:14:00.113566266 +0000 UTC m=+0.186021623 container remove 9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:14:00 compute-0 systemd[1]: libpod-conmon-9672c28e62897c782f84075c364b9949c62aa3d138d9acc728aa8a694e259672.scope: Deactivated successfully.
Jan 20 19:14:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:00.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:00 compute-0 podman[270834]: 2026-01-20 19:14:00.344656658 +0000 UTC m=+0.049256393 container create 10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_swartz, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 19:14:00 compute-0 systemd[1]: Started libpod-conmon-10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520.scope.
Jan 20 19:14:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273ec12d9458b3dcee53005928deca4f431509600d4d5d5534c3041a4d63b9fd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273ec12d9458b3dcee53005928deca4f431509600d4d5d5534c3041a4d63b9fd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273ec12d9458b3dcee53005928deca4f431509600d4d5d5534c3041a4d63b9fd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/273ec12d9458b3dcee53005928deca4f431509600d4d5d5534c3041a4d63b9fd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:00 compute-0 podman[270834]: 2026-01-20 19:14:00.32883938 +0000 UTC m=+0.033439135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:14:00 compute-0 podman[270834]: 2026-01-20 19:14:00.430085179 +0000 UTC m=+0.134684944 container init 10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:14:00 compute-0 podman[270834]: 2026-01-20 19:14:00.442394352 +0000 UTC m=+0.146994117 container start 10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:14:00 compute-0 podman[270834]: 2026-01-20 19:14:00.446125234 +0000 UTC m=+0.150724969 container attach 10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_swartz, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:14:00 compute-0 awesome_swartz[270852]: {
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:     "0": [
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:         {
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "devices": [
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "/dev/loop3"
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             ],
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "lv_name": "ceph_lv0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "lv_size": "21470642176",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "name": "ceph_lv0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "tags": {
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.cluster_name": "ceph",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.crush_device_class": "",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.encrypted": "0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.osd_id": "0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.type": "block",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.vdo": "0",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:                 "ceph.with_tpm": "0"
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             },
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "type": "block",
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:             "vg_name": "ceph_vg0"
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:         }
Jan 20 19:14:00 compute-0 awesome_swartz[270852]:     ]
Jan 20 19:14:00 compute-0 awesome_swartz[270852]: }
Jan 20 19:14:00 compute-0 systemd[1]: libpod-10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520.scope: Deactivated successfully.
Jan 20 19:14:00 compute-0 podman[270861]: 2026-01-20 19:14:00.83526994 +0000 UTC m=+0.044602757 container died 10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:14:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-273ec12d9458b3dcee53005928deca4f431509600d4d5d5534c3041a4d63b9fd-merged.mount: Deactivated successfully.
Jan 20 19:14:00 compute-0 ceph-mon[74381]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 676 B/s wr, 5 op/s
Jan 20 19:14:00 compute-0 podman[270861]: 2026-01-20 19:14:00.887947356 +0000 UTC m=+0.097280203 container remove 10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:14:00 compute-0 systemd[1]: libpod-conmon-10fcbf6fc7c8024d43f0df04b6514bda266ca2e572f3dd8b4a5b038631db6520.scope: Deactivated successfully.
Jan 20 19:14:00 compute-0 sudo[270729]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:01 compute-0 sudo[270876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:14:01 compute-0 sudo[270876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:01 compute-0 sudo[270876]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:01 compute-0 nova_compute[254061]: 2026-01-20 19:14:01.045 254065 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768936426.0451562, 11e4950d-c220-48d6-93ff-810afbe8ffb3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:14:01 compute-0 nova_compute[254061]: 2026-01-20 19:14:01.047 254065 INFO nova.compute.manager [-] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] VM Stopped (Lifecycle Event)
Jan 20 19:14:01 compute-0 nova_compute[254061]: 2026-01-20 19:14:01.070 254065 DEBUG nova.compute.manager [None req-1f7f31b8-1bba-4194-8fd6-dac1006522ae - - - - - -] [instance: 11e4950d-c220-48d6-93ff-810afbe8ffb3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:14:01 compute-0 sudo[270901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:14:01 compute-0 sudo[270901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:01 compute-0 nova_compute[254061]: 2026-01-20 19:14:01.114 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.493901729 +0000 UTC m=+0.035954494 container create c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_visvesvaraya, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:14:01 compute-0 systemd[1]: Started libpod-conmon-c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728.scope.
Jan 20 19:14:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.478547864 +0000 UTC m=+0.020600649 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.576793201 +0000 UTC m=+0.118845996 container init c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.582511286 +0000 UTC m=+0.124564051 container start c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.585728013 +0000 UTC m=+0.127780808 container attach c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_visvesvaraya, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 19:14:01 compute-0 admiring_visvesvaraya[270982]: 167 167
Jan 20 19:14:01 compute-0 systemd[1]: libpod-c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728.scope: Deactivated successfully.
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.587360438 +0000 UTC m=+0.129413223 container died c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e4774e474d6d9e9cfc7b5d7c1aa1d23e33d56bde492817f2112b8a3404c4acd-merged.mount: Deactivated successfully.
Jan 20 19:14:01 compute-0 podman[270966]: 2026-01-20 19:14:01.618349736 +0000 UTC m=+0.160402501 container remove c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:01 compute-0 systemd[1]: libpod-conmon-c003519873141bf9c421eeca4bc70ed95b758c4ac16ba1b3648dbe324857f728.scope: Deactivated successfully.
Jan 20 19:14:01 compute-0 nova_compute[254061]: 2026-01-20 19:14:01.653 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:01 compute-0 podman[271004]: 2026-01-20 19:14:01.786408952 +0000 UTC m=+0.050394094 container create 5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 19:14:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:01.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:01 compute-0 systemd[1]: Started libpod-conmon-5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852.scope.
Jan 20 19:14:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c58dc67ed771339c0aa707361847f3bc658b3ffd80a9adc4dbfd2c8ad44aa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c58dc67ed771339c0aa707361847f3bc658b3ffd80a9adc4dbfd2c8ad44aa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c58dc67ed771339c0aa707361847f3bc658b3ffd80a9adc4dbfd2c8ad44aa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c58dc67ed771339c0aa707361847f3bc658b3ffd80a9adc4dbfd2c8ad44aa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:01 compute-0 podman[271004]: 2026-01-20 19:14:01.847127895 +0000 UTC m=+0.111113067 container init 5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:01 compute-0 podman[271004]: 2026-01-20 19:14:01.856610801 +0000 UTC m=+0.120595953 container start 5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:14:01 compute-0 podman[271004]: 2026-01-20 19:14:01.765302481 +0000 UTC m=+0.029287663 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:14:01 compute-0 podman[271004]: 2026-01-20 19:14:01.859949661 +0000 UTC m=+0.123934793 container attach 5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chatelet, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:14:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:02.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:02 compute-0 lvm[271096]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:14:02 compute-0 lvm[271096]: VG ceph_vg0 finished
Jan 20 19:14:02 compute-0 relaxed_chatelet[271020]: {}
Jan 20 19:14:02 compute-0 systemd[1]: libpod-5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852.scope: Deactivated successfully.
Jan 20 19:14:02 compute-0 systemd[1]: libpod-5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852.scope: Consumed 1.168s CPU time.
Jan 20 19:14:02 compute-0 podman[271004]: 2026-01-20 19:14:02.601053031 +0000 UTC m=+0.865038203 container died 5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chatelet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-86c58dc67ed771339c0aa707361847f3bc658b3ffd80a9adc4dbfd2c8ad44aa9-merged.mount: Deactivated successfully.
Jan 20 19:14:02 compute-0 podman[271004]: 2026-01-20 19:14:02.644972859 +0000 UTC m=+0.908957981 container remove 5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_chatelet, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:14:02 compute-0 systemd[1]: libpod-conmon-5c8de1d30a72b578bbad062940adaf51e47f94964ff0900a7d6a741058849852.scope: Deactivated successfully.
Jan 20 19:14:02 compute-0 sudo[270901]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:14:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:14:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:14:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:14:03 compute-0 sudo[271113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:14:03 compute-0 sudo[271113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:03 compute-0 sudo[271113]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:03 compute-0 ceph-mon[74381]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:14:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:03.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:04.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:14:05 compute-0 ceph-mon[74381]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:05 compute-0 podman[271140]: 2026-01-20 19:14:05.140875652 +0000 UTC m=+0.112149936 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller)
Jan 20 19:14:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:05.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.116 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.127 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:06.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.155 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.155 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:14:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:14:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521740396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.588 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.655 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.733 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.734 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4554MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.734 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.734 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.804 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.804 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:14:06 compute-0 nova_compute[254061]: 2026-01-20 19:14:06.819 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:14:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:07.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:14:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651252085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:07 compute-0 nova_compute[254061]: 2026-01-20 19:14:07.275 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:14:07 compute-0 nova_compute[254061]: 2026-01-20 19:14:07.279 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:14:07 compute-0 nova_compute[254061]: 2026-01-20 19:14:07.614 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:14:07 compute-0 ceph-mon[74381]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3521740396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2651252085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:07.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:07 compute-0 nova_compute[254061]: 2026-01-20 19:14:07.916 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:14:07 compute-0 nova_compute[254061]: 2026-01-20 19:14:07.917 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:14:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:08.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:08.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:14:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:08.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:14:08 compute-0 ceph-mon[74381]: pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 773 B/s rd, 0 op/s
Jan 20 19:14:08 compute-0 nova_compute[254061]: 2026-01-20 19:14:08.918 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:08 compute-0 nova_compute[254061]: 2026-01-20 19:14:08.919 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:08 compute-0 nova_compute[254061]: 2026-01-20 19:14:08.919 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:14:09 compute-0 nova_compute[254061]: 2026-01-20 19:14:09.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:09 compute-0 nova_compute[254061]: 2026-01-20 19:14:09.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:14:09 compute-0 nova_compute[254061]: 2026-01-20 19:14:09.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:14:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:14:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:14:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:09.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:14:10 compute-0 nova_compute[254061]: 2026-01-20 19:14:10.084 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:14:10 compute-0 nova_compute[254061]: 2026-01-20 19:14:10.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:10.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:11 compute-0 ceph-mon[74381]: pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:14:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:14:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1881081078' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2919463750' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:11 compute-0 nova_compute[254061]: 2026-01-20 19:14:11.119 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:11 compute-0 nova_compute[254061]: 2026-01-20 19:14:11.657 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:11.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:14:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3501654440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3443376085' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:12.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:13 compute-0 nova_compute[254061]: 2026-01-20 19:14:13.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:13 compute-0 ceph-mon[74381]: pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:14:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3250790342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:13.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 20 19:14:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:14 compute-0 ceph-mon[74381]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Jan 20 19:14:15 compute-0 nova_compute[254061]: 2026-01-20 19:14:15.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:15.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:14:16 compute-0 nova_compute[254061]: 2026-01-20 19:14:16.122 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:16 compute-0 nova_compute[254061]: 2026-01-20 19:14:16.681 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:17 compute-0 ceph-mon[74381]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Jan 20 19:14:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:17.204Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:14:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:17.204Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:14:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:17.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:14:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:17.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v970: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:14:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3228832477' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:14:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3932755803' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:14:18 compute-0 sudo[271227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:14:18 compute-0 sudo[271227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:18 compute-0 sudo[271227]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:18.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:19 compute-0 ceph-mon[74381]: pgmap v970: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:14:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:19] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:14:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:19] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:14:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:19.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v971: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:14:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:20.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:20 compute-0 ceph-mon[74381]: pgmap v971: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 20 19:14:21 compute-0 nova_compute[254061]: 2026-01-20 19:14:21.125 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:21 compute-0 nova_compute[254061]: 2026-01-20 19:14:21.684 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:21.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.904926) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936461904960, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1360, "num_deletes": 257, "total_data_size": 2478131, "memory_usage": 2518616, "flush_reason": "Manual Compaction"}
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936461919655, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2420478, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27335, "largest_seqno": 28694, "table_properties": {"data_size": 2414149, "index_size": 3528, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13435, "raw_average_key_size": 19, "raw_value_size": 2401254, "raw_average_value_size": 3475, "num_data_blocks": 155, "num_entries": 691, "num_filter_entries": 691, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936337, "oldest_key_time": 1768936337, "file_creation_time": 1768936461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14770 microseconds, and 6196 cpu microseconds.
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.919696) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2420478 bytes OK
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.919712) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.921234) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.921248) EVENT_LOG_v1 {"time_micros": 1768936461921244, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.921265) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2472134, prev total WAL file size 2472134, number of live WAL files 2.
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.922262) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373536' seq:0, type:0; will stop at (end)
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2363KB)], [59(14MB)]
Jan 20 19:14:21 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936461922300, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17726279, "oldest_snapshot_seqno": -1}
Jan 20 19:14:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v972: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6267 keys, 17589869 bytes, temperature: kUnknown
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936462022085, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17589869, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17544450, "index_size": 28669, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15685, "raw_key_size": 159888, "raw_average_key_size": 25, "raw_value_size": 17427851, "raw_average_value_size": 2780, "num_data_blocks": 1170, "num_entries": 6267, "num_filter_entries": 6267, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936461, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.022340) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17589869 bytes
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.024144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 177.5 rd, 176.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 14.6 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(14.6) write-amplify(7.3) OK, records in: 6797, records dropped: 530 output_compression: NoCompression
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.024166) EVENT_LOG_v1 {"time_micros": 1768936462024156, "job": 32, "event": "compaction_finished", "compaction_time_micros": 99841, "compaction_time_cpu_micros": 40254, "output_level": 6, "num_output_files": 1, "total_output_size": 17589869, "num_input_records": 6797, "num_output_records": 6267, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936462024752, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936462028217, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:21.922200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.028346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.028352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.028354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.028356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:14:22 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:14:22.028358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:14:22 compute-0 nova_compute[254061]: 2026-01-20 19:14:22.123 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:14:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:22.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:22 compute-0 ceph-mon[74381]: pgmap v972: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Jan 20 19:14:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:23.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 19:14:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:24.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:25 compute-0 ceph-mon[74381]: pgmap v973: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 19:14:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:25.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v974: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 19:14:26 compute-0 podman[271258]: 2026-01-20 19:14:26.084672809 +0000 UTC m=+0.059539192 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:14:26 compute-0 nova_compute[254061]: 2026-01-20 19:14:26.128 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:26.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:26 compute-0 nova_compute[254061]: 2026-01-20 19:14:26.685 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:27 compute-0 ceph-mon[74381]: pgmap v974: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 20 19:14:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:27.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:27.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v975: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:14:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:28.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:28.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:29 compute-0 ceph-mon[74381]: pgmap v975: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:14:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:29] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:14:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:29] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:14:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:14:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:29.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:14:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:30.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:14:30.290 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:14:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:14:30.290 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:14:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:14:30.290 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:14:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v976: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:14:31 compute-0 nova_compute[254061]: 2026-01-20 19:14:31.131 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:31 compute-0 ovn_controller[155128]: 2026-01-20T19:14:31Z|00080|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 20 19:14:31 compute-0 ceph-mon[74381]: pgmap v976: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:14:31 compute-0 nova_compute[254061]: 2026-01-20 19:14:31.689 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.003000079s ======
Jan 20 19:14:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:31.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 20 19:14:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v977: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:14:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:32.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:33 compute-0 ceph-mon[74381]: pgmap v977: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:14:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:33.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v978: 337 pgs: 337 active+clean; 100 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 68 op/s
Jan 20 19:14:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:34.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:34 compute-0 ceph-mon[74381]: pgmap v978: 337 pgs: 337 active+clean; 100 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 68 op/s
Jan 20 19:14:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [WARNING] 019/191435 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Jan 20 19:14:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm[97597]: [ALERT] 019/191435 (4) : backend 'backend' has no server available!
Jan 20 19:14:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:35.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v979: 337 pgs: 337 active+clean; 100 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 1.3 MiB/s wr, 29 op/s
Jan 20 19:14:36 compute-0 nova_compute[254061]: 2026-01-20 19:14:36.133 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:36 compute-0 podman[271290]: 2026-01-20 19:14:36.143485872 +0000 UTC m=+0.112991168 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:14:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:36.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:36 compute-0 nova_compute[254061]: 2026-01-20 19:14:36.726 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:37 compute-0 ceph-mon[74381]: pgmap v979: 337 pgs: 337 active+clean; 100 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 1.3 MiB/s wr, 29 op/s
Jan 20 19:14:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:37.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:37.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v980: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 429 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 19:14:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:38.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:38 compute-0 sudo[271320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:14:38 compute-0 sudo[271320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:38 compute-0 sudo[271320]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:38.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:39 compute-0 ceph-mon[74381]: pgmap v980: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 429 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 19:14:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:39] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:14:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:39] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:14:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:39.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v981: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 19:14:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:40.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:14:41 compute-0 nova_compute[254061]: 2026-01-20 19:14:41.135 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:41 compute-0 ceph-mon[74381]: pgmap v981: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 19:14:41 compute-0 nova_compute[254061]: 2026-01-20 19:14:41.728 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:41.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v982: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 19:14:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:42.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:43 compute-0 ceph-mon[74381]: pgmap v982: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 20 19:14:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:43.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v983: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:14:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:44.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:45 compute-0 ceph-mon[74381]: pgmap v983: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 20 19:14:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:45.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v984: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 860 KiB/s wr, 38 op/s
Jan 20 19:14:46 compute-0 nova_compute[254061]: 2026-01-20 19:14:46.138 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:46.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:46 compute-0 nova_compute[254061]: 2026-01-20 19:14:46.731 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:47.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:14:47 compute-0 ceph-mon[74381]: pgmap v984: 337 pgs: 337 active+clean; 121 MiB data, 311 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 860 KiB/s wr, 38 op/s
Jan 20 19:14:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:47.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 866 KiB/s wr, 67 op/s
Jan 20 19:14:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:48.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:14:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045393656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:14:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:14:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3045393656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:14:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2645568445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:14:48 compute-0 ceph-mon[74381]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 152 KiB/s rd, 866 KiB/s wr, 67 op/s
Jan 20 19:14:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3045393656' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:14:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3045393656' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:14:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:48.895Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:49] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:14:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:49] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:14:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:49.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 20 19:14:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:14:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:50.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:14:51 compute-0 nova_compute[254061]: 2026-01-20 19:14:51.141 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:51 compute-0 ceph-mon[74381]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 20 19:14:51 compute-0 nova_compute[254061]: 2026-01-20 19:14:51.734 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:51.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 20 19:14:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:52.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:53 compute-0 ceph-mon[74381]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Jan 20 19:14:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:53.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 8.0 KiB/s wr, 30 op/s
Jan 20 19:14:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:54.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:14:55
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'backups', '.mgr', 'default.rgw.control', '.nfs', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'cephfs.cephfs.data']
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:55 compute-0 ceph-mon[74381]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 8.0 KiB/s wr, 30 op/s
Jan 20 19:14:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:14:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:14:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:55.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Jan 20 19:14:56 compute-0 nova_compute[254061]: 2026-01-20 19:14:56.145 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:56.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:56 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:14:56.324 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:14:56 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:14:56.325 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:14:56 compute-0 nova_compute[254061]: 2026-01-20 19:14:56.326 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:56 compute-0 nova_compute[254061]: 2026-01-20 19:14:56.735 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:14:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:14:57 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 19:14:57 compute-0 podman[271364]: 2026-01-20 19:14:57.123045757 +0000 UTC m=+0.087423917 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:14:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:57.208Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:14:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:57.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:14:57 compute-0 ceph-mon[74381]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Jan 20 19:14:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:14:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:57.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:14:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Jan 20 19:14:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:14:58.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:14:58 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:14:58.327 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:14:58 compute-0 sudo[271386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:14:58 compute-0 sudo[271386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:58 compute-0 sudo[271386]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:58 compute-0 ceph-mon[74381]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.6 KiB/s wr, 29 op/s
Jan 20 19:14:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:14:58.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:14:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:14:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:14:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:14:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:14:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:14:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:14:59.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 20 19:15:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:00.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:01 compute-0 ceph-mon[74381]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 0 B/s wr, 1 op/s
Jan 20 19:15:01 compute-0 nova_compute[254061]: 2026-01-20 19:15:01.147 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:01 compute-0 nova_compute[254061]: 2026-01-20 19:15:01.738 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:01.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
Jan 20 19:15:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:02.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:03 compute-0 ceph-mon[74381]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 0 B/s wr, 1 op/s
Jan 20 19:15:03 compute-0 sudo[271415]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:03 compute-0 sudo[271415]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:03 compute-0 sudo[271415]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:03 compute-0 sudo[271440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:15:03 compute-0 sudo[271440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:03.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:15:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 19:15:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:04.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 19:15:04 compute-0 sudo[271440]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:04 compute-0 sudo[271499]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:04 compute-0 sudo[271499]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:04 compute-0 sudo[271499]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:04 compute-0 sudo[271525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Jan 20 19:15:04 compute-0 sudo[271525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:04 compute-0 sudo[271525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 847 B/s rd, 0 op/s
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:15:04 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:04 compute-0 sudo[271570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:04 compute-0 sudo[271570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:04 compute-0 sudo[271570]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:05 compute-0 sudo[271595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:15:05 compute-0 sudo[271595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:05 compute-0 ceph-mon[74381]: pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:15:05 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.499776865 +0000 UTC m=+0.041754441 container create 95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:15:05 compute-0 systemd[1]: Started libpod-conmon-95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942.scope.
Jan 20 19:15:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.480889144 +0000 UTC m=+0.022866710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.577325623 +0000 UTC m=+0.119303169 container init 95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.58759725 +0000 UTC m=+0.129574806 container start 95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.591209829 +0000 UTC m=+0.133187385 container attach 95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclaren, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:15:05 compute-0 eloquent_mclaren[271679]: 167 167
Jan 20 19:15:05 compute-0 systemd[1]: libpod-95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942.scope: Deactivated successfully.
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.591971569 +0000 UTC m=+0.133949125 container died 95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:15:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-24c221cda7f843e5e50abb75288dac5d273bae85d9e7b5010188bb79b641c601-merged.mount: Deactivated successfully.
Jan 20 19:15:05 compute-0 podman[271662]: 2026-01-20 19:15:05.629470563 +0000 UTC m=+0.171448109 container remove 95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_mclaren, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 19:15:05 compute-0 systemd[1]: libpod-conmon-95b2355e9fe56eeb379043c1b19e313c55007029bb50cdc999aa31a98c440942.scope: Deactivated successfully.
Jan 20 19:15:05 compute-0 podman[271703]: 2026-01-20 19:15:05.861429759 +0000 UTC m=+0.071392523 container create 917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_noyce, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:15:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:05.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:05 compute-0 systemd[1]: Started libpod-conmon-917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39.scope.
Jan 20 19:15:05 compute-0 podman[271703]: 2026-01-20 19:15:05.833877923 +0000 UTC m=+0.043840747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:15:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7284c7a521905b778fb4ff5bf2786591bf126d96409adaf38ef18f368525b28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7284c7a521905b778fb4ff5bf2786591bf126d96409adaf38ef18f368525b28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7284c7a521905b778fb4ff5bf2786591bf126d96409adaf38ef18f368525b28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7284c7a521905b778fb4ff5bf2786591bf126d96409adaf38ef18f368525b28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7284c7a521905b778fb4ff5bf2786591bf126d96409adaf38ef18f368525b28/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:05 compute-0 podman[271703]: 2026-01-20 19:15:05.947678512 +0000 UTC m=+0.157641276 container init 917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_noyce, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:05 compute-0 podman[271703]: 2026-01-20 19:15:05.961405923 +0000 UTC m=+0.171368657 container start 917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_noyce, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Jan 20 19:15:05 compute-0 podman[271703]: 2026-01-20 19:15:05.966884961 +0000 UTC m=+0.176847735 container attach 917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.149 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.174 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.174 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.175 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.175 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.176 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:06.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:06 compute-0 flamboyant_noyce[271719]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:15:06 compute-0 flamboyant_noyce[271719]: --> All data devices are unavailable
Jan 20 19:15:06 compute-0 systemd[1]: libpod-917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39.scope: Deactivated successfully.
Jan 20 19:15:06 compute-0 podman[271703]: 2026-01-20 19:15:06.382335051 +0000 UTC m=+0.592297775 container died 917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_noyce, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:15:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7284c7a521905b778fb4ff5bf2786591bf126d96409adaf38ef18f368525b28-merged.mount: Deactivated successfully.
Jan 20 19:15:06 compute-0 ceph-mon[74381]: pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 847 B/s rd, 0 op/s
Jan 20 19:15:06 compute-0 podman[271703]: 2026-01-20 19:15:06.439137428 +0000 UTC m=+0.649100152 container remove 917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_noyce, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:15:06 compute-0 systemd[1]: libpod-conmon-917f7a3e0563e7ce4646f9ddfc323fc7bd7e4264f2789de511311c6fa96bfd39.scope: Deactivated successfully.
Jan 20 19:15:06 compute-0 sudo[271595]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:06 compute-0 podman[271757]: 2026-01-20 19:15:06.55936683 +0000 UTC m=+0.143954595 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 20 19:15:06 compute-0 sudo[271783]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:06 compute-0 sudo[271783]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:06 compute-0 sudo[271783]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:06 compute-0 sudo[271817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:15:06 compute-0 sudo[271817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:15:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1406994262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.732 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.779 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:15:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.967 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.969 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4555MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.970 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:06 compute-0 nova_compute[254061]: 2026-01-20 19:15:06.970 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.074 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.075 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.092086502 +0000 UTC m=+0.052790829 container create c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.092 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:07 compute-0 systemd[1]: Started libpod-conmon-c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6.scope.
Jan 20 19:15:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.071232918 +0000 UTC m=+0.031937285 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.173946966 +0000 UTC m=+0.134651303 container init c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.183381082 +0000 UTC m=+0.144085429 container start c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.187279708 +0000 UTC m=+0.147984035 container attach c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:15:07 compute-0 quizzical_beaver[271902]: 167 167
Jan 20 19:15:07 compute-0 systemd[1]: libpod-c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6.scope: Deactivated successfully.
Jan 20 19:15:07 compute-0 conmon[271902]: conmon c3d4f1b1e44962d56203 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6.scope/container/memory.events
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.194060861 +0000 UTC m=+0.154765198 container died c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_beaver, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:15:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:07.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3d21cbd246edd45d1f3ef07f34cd6c003b61dec4f9e20e18292d0528042ca18-merged.mount: Deactivated successfully.
Jan 20 19:15:07 compute-0 podman[271885]: 2026-01-20 19:15:07.236990532 +0000 UTC m=+0.197694879 container remove c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_beaver, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:15:07 compute-0 systemd[1]: libpod-conmon-c3d4f1b1e44962d56203f4a2b545056d60f9d040d0a40b45eed3c76b76e9f1b6.scope: Deactivated successfully.
Jan 20 19:15:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1406994262' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:07 compute-0 podman[271947]: 2026-01-20 19:15:07.446584022 +0000 UTC m=+0.061702480 container create f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dijkstra, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:07 compute-0 systemd[1]: Started libpod-conmon-f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9.scope.
Jan 20 19:15:07 compute-0 podman[271947]: 2026-01-20 19:15:07.422365077 +0000 UTC m=+0.037483575 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:15:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f3d785c5742387f3e492deba727f5054ba96edffae70eee89b28c0d0def333/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f3d785c5742387f3e492deba727f5054ba96edffae70eee89b28c0d0def333/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f3d785c5742387f3e492deba727f5054ba96edffae70eee89b28c0d0def333/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c8f3d785c5742387f3e492deba727f5054ba96edffae70eee89b28c0d0def333/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:07 compute-0 podman[271947]: 2026-01-20 19:15:07.55662591 +0000 UTC m=+0.171744378 container init f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dijkstra, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:15:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:15:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906562570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:07 compute-0 podman[271947]: 2026-01-20 19:15:07.566762964 +0000 UTC m=+0.181881412 container start f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dijkstra, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 19:15:07 compute-0 podman[271947]: 2026-01-20 19:15:07.571203304 +0000 UTC m=+0.186321752 container attach f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.590 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.600 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.618 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.621 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.621 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]: {
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:     "0": [
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:         {
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "devices": [
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "/dev/loop3"
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             ],
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "lv_name": "ceph_lv0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "lv_size": "21470642176",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "name": "ceph_lv0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "tags": {
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.cluster_name": "ceph",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.crush_device_class": "",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.encrypted": "0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.osd_id": "0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.type": "block",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.vdo": "0",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:                 "ceph.with_tpm": "0"
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             },
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "type": "block",
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:             "vg_name": "ceph_vg0"
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:         }
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]:     ]
Jan 20 19:15:07 compute-0 stupefied_dijkstra[271964]: }
Jan 20 19:15:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:07.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:07 compute-0 systemd[1]: libpod-f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9.scope: Deactivated successfully.
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.954 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.957 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:07 compute-0 podman[271975]: 2026-01-20 19:15:07.958341077 +0000 UTC m=+0.032462420 container died f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dijkstra, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Jan 20 19:15:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8f3d785c5742387f3e492deba727f5054ba96edffae70eee89b28c0d0def333-merged.mount: Deactivated successfully.
Jan 20 19:15:07 compute-0 nova_compute[254061]: 2026-01-20 19:15:07.981 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 19:15:07 compute-0 podman[271975]: 2026-01-20 19:15:07.999547722 +0000 UTC m=+0.073669055 container remove f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_dijkstra, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:15:08 compute-0 systemd[1]: libpod-conmon-f5fafc5fc073a05b10e955eb0983cd4356ad5e76948e4fc4130a1a38295e1ec9.scope: Deactivated successfully.
Jan 20 19:15:08 compute-0 sudo[271817]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.066 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.067 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.076 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.077 254065 INFO nova.compute.claims [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Claim successful on node compute-0.ctlplane.example.com
Jan 20 19:15:08 compute-0 sudo[271988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:08 compute-0 sudo[271988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:08 compute-0 sudo[271988]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:08 compute-0 sudo[272014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:15:08 compute-0 sudo[272014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:08.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.223 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:08 compute-0 ceph-mon[74381]: pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:15:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1906562570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.576361947 +0000 UTC m=+0.045654917 container create 48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.621 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.622 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:08 compute-0 systemd[1]: Started libpod-conmon-48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70.scope.
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.554532546 +0000 UTC m=+0.023825596 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:15:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.676146106 +0000 UTC m=+0.145439076 container init 48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bardeen, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.683016212 +0000 UTC m=+0.152309182 container start 48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bardeen, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.686392284 +0000 UTC m=+0.155685274 container attach 48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:15:08 compute-0 systemd[1]: libpod-48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70.scope: Deactivated successfully.
Jan 20 19:15:08 compute-0 sharp_bardeen[272117]: 167 167
Jan 20 19:15:08 compute-0 conmon[272117]: conmon 48846cf781662d3b70bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70.scope/container/memory.events
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.68921521 +0000 UTC m=+0.158508180 container died 48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:15:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:15:08 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620791173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3b8675fb472aa7546e771ba73371bc335cd9e7406581e5f95f0263886a78ada-merged.mount: Deactivated successfully.
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.731 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.740 254065 DEBUG nova.compute.provider_tree [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:15:08 compute-0 podman[272100]: 2026-01-20 19:15:08.743129048 +0000 UTC m=+0.212422058 container remove 48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_bardeen, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:15:08 compute-0 systemd[1]: libpod-conmon-48846cf781662d3b70bd29f806b90ef31776b1e994797163b5455a3413354a70.scope: Deactivated successfully.
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.771 254065 DEBUG nova.scheduler.client.report [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.808 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.809 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.870 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.870 254065 DEBUG nova.network.neutron [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:15:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.896 254065 INFO nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 19:15:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:08.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:08 compute-0 nova_compute[254061]: 2026-01-20 19:15:08.918 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 19:15:08 compute-0 podman[272143]: 2026-01-20 19:15:08.93318138 +0000 UTC m=+0.048931955 container create 7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:15:08 compute-0 systemd[1]: Started libpod-conmon-7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c.scope.
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.002 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 19:15:09 compute-0 podman[272143]: 2026-01-20 19:15:08.910324181 +0000 UTC m=+0.026074846 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.004 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.005 254065 INFO nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Creating image(s)
Jan 20 19:15:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddd08610a5b98fe5a438cbb3b4702269150e87ddfd9baaa6e2ca4ef5ea69ea5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddd08610a5b98fe5a438cbb3b4702269150e87ddfd9baaa6e2ca4ef5ea69ea5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddd08610a5b98fe5a438cbb3b4702269150e87ddfd9baaa6e2ca4ef5ea69ea5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ddd08610a5b98fe5a438cbb3b4702269150e87ddfd9baaa6e2ca4ef5ea69ea5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:09 compute-0 podman[272143]: 2026-01-20 19:15:09.030361018 +0000 UTC m=+0.146111613 container init 7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.037 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:09 compute-0 podman[272143]: 2026-01-20 19:15:09.042084236 +0000 UTC m=+0.157834831 container start 7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_moore, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 19:15:09 compute-0 podman[272143]: 2026-01-20 19:15:09.04776454 +0000 UTC m=+0.163515115 container attach 7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_moore, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.071 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.101 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.104 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.131 254065 DEBUG nova.policy [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.162 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.162 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.163 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.163 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.188 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.191 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3620791173' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:09 compute-0 ceph-mon[74381]: pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.661 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:09 compute-0 lvm[272328]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:15:09 compute-0 lvm[272328]: VG ceph_vg0 finished
Jan 20 19:15:09 compute-0 cranky_moore[272160]: {}
Jan 20 19:15:09 compute-0 systemd[1]: libpod-7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c.scope: Deactivated successfully.
Jan 20 19:15:09 compute-0 systemd[1]: libpod-7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c.scope: Consumed 1.103s CPU time.
Jan 20 19:15:09 compute-0 conmon[272160]: conmon 7472a59eaffe5603b467 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c.scope/container/memory.events
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.753 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] resizing rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 19:15:09 compute-0 podman[272365]: 2026-01-20 19:15:09.779343881 +0000 UTC m=+0.031661287 container died 7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:15:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6ddd08610a5b98fe5a438cbb3b4702269150e87ddfd9baaa6e2ca4ef5ea69ea5-merged.mount: Deactivated successfully.
Jan 20 19:15:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:15:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:15:09 compute-0 podman[272365]: 2026-01-20 19:15:09.814327747 +0000 UTC m=+0.066645133 container remove 7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_moore, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 20 19:15:09 compute-0 systemd[1]: libpod-conmon-7472a59eaffe5603b46712bc6918aeb0ce494c776e5fa1c6caabf1040c50f98c.scope: Deactivated successfully.
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.881 254065 DEBUG nova.objects.instance [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'migration_context' on Instance uuid 464ffed9-a738-406a-9a42-2bd3d60d27f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:15:09 compute-0 sudo[272014]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:15:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:09.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.904 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.904 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Ensure instance console log exists: /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.905 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.905 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:09 compute-0 nova_compute[254061]: 2026-01-20 19:15:09.905 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:15:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:09 compute-0 sudo[272419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:15:09 compute-0 sudo[272419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:09 compute-0 sudo[272419]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:10 compute-0 nova_compute[254061]: 2026-01-20 19:15:10.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:10 compute-0 nova_compute[254061]: 2026-01-20 19:15:10.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:15:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:10.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v997: 337 pgs: 337 active+clean; 56 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 368 KiB/s wr, 2 op/s
Jan 20 19:15:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:10 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:15:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:15:10 compute-0 nova_compute[254061]: 2026-01-20 19:15:10.969 254065 DEBUG nova.network.neutron [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Successfully created port: 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.131 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.149 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.149 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.152 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:11 compute-0 nova_compute[254061]: 2026-01-20 19:15:11.779 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:11.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.017 254065 DEBUG nova.network.neutron [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Successfully updated port: 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.042 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.042 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.043 254065 DEBUG nova.network.neutron [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.141 254065 DEBUG nova.compute.manager [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.141 254065 DEBUG nova.compute.manager [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing instance network info cache due to event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.141 254065 DEBUG oslo_concurrency.lockutils [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:15:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:12.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:12 compute-0 nova_compute[254061]: 2026-01-20 19:15:12.200 254065 DEBUG nova.network.neutron [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 19:15:12 compute-0 ceph-mon[74381]: pgmap v997: 337 pgs: 337 active+clean; 56 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 368 KiB/s wr, 2 op/s
Jan 20 19:15:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3485938252' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2109707921' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1537446602' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/919357469' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.127 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.825 254065 DEBUG nova.network.neutron [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.854 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.854 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Instance network_info: |[{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.855 254065 DEBUG oslo_concurrency.lockutils [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.856 254065 DEBUG nova.network.neutron [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.861 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Start _get_guest_xml network_info=[{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'bc57af0c-4b71-499e-9808-3c8fc070a488'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.867 254065 WARNING nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.874 254065 DEBUG nova.virt.libvirt.host [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.875 254065 DEBUG nova.virt.libvirt.host [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.878 254065 DEBUG nova.virt.libvirt.host [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.878 254065 DEBUG nova.virt.libvirt.host [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.878 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.879 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T19:05:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7446c314-5a17-42fd-97d9-a7a94e27bff9',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.879 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.880 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.880 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.880 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.880 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.881 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.881 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.882 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.882 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.882 254065 DEBUG nova.virt.hardware [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 19:15:13 compute-0 nova_compute[254061]: 2026-01-20 19:15:13.885 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:13.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:14.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:14 compute-0 ceph-mon[74381]: pgmap v998: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Jan 20 19:15:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:15:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4124081107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.320 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.347 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.351 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:15:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1919448934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.838 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.840 254065 DEBUG nova.virt.libvirt.vif [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:15:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-687559922',display_name='tempest-TestNetworkBasicOps-server-687559922',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-687559922',id=11,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5+/TlwhQfGy5x5Qu13nVCd3hZZgyJqcdIE8MBsjPZKKeVwNAJjbXcfTn5a0nUT3sF8nVHjNP5cE4VCngPg11b25JCd13c+VH8ik9H9ryZ3Q54ulsbxEsCug3YhIstfOA==',key_name='tempest-TestNetworkBasicOps-576438739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-49ymama9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:15:08Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=464ffed9-a738-406a-9a42-2bd3d60d27f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.840 254065 DEBUG nova.network.os_vif_util [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.841 254065 DEBUG nova.network.os_vif_util [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.842 254065 DEBUG nova.objects.instance [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_devices' on Instance uuid 464ffed9-a738-406a-9a42-2bd3d60d27f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.859 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] End _get_guest_xml xml=<domain type="kvm">
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <uuid>464ffed9-a738-406a-9a42-2bd3d60d27f2</uuid>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <name>instance-0000000b</name>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <memory>131072</memory>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <vcpu>1</vcpu>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:name>tempest-TestNetworkBasicOps-server-687559922</nova:name>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:creationTime>2026-01-20 19:15:13</nova:creationTime>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:flavor name="m1.nano">
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:memory>128</nova:memory>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:disk>1</nova:disk>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:swap>0</nova:swap>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:vcpus>1</nova:vcpus>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </nova:flavor>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:owner>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </nova:owner>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <nova:ports>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <nova:port uuid="49f8aed7-e782-474a-b2f1-2fbc7b04e852">
Jan 20 19:15:14 compute-0 nova_compute[254061]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         </nova:port>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </nova:ports>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </nova:instance>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <sysinfo type="smbios">
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <system>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <entry name="manufacturer">RDO</entry>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <entry name="product">OpenStack Compute</entry>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <entry name="serial">464ffed9-a738-406a-9a42-2bd3d60d27f2</entry>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <entry name="uuid">464ffed9-a738-406a-9a42-2bd3d60d27f2</entry>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <entry name="family">Virtual Machine</entry>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </system>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <os>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <boot dev="hd"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <smbios mode="sysinfo"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </os>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <features>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <vmcoreinfo/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </features>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <clock offset="utc">
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <timer name="hpet" present="no"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <cpu mode="host-model" match="exact">
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <disk type="network" device="disk">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/464ffed9-a738-406a-9a42-2bd3d60d27f2_disk">
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </source>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <target dev="vda" bus="virtio"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <disk type="network" device="cdrom">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/464ffed9-a738-406a-9a42-2bd3d60d27f2_disk.config">
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </source>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:15:14 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <target dev="sda" bus="sata"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <interface type="ethernet">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <mac address="fa:16:3e:9f:26:34"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <mtu size="1442"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <target dev="tap49f8aed7-e7"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <serial type="pty">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <log file="/var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/console.log" append="off"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <video>
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </video>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <input type="tablet" bus="usb"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <rng model="virtio">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <backend model="random">/dev/urandom</backend>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <controller type="usb" index="0"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     <memballoon model="virtio">
Jan 20 19:15:14 compute-0 nova_compute[254061]:       <stats period="10"/>
Jan 20 19:15:14 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:15:14 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:15:14 compute-0 nova_compute[254061]: </domain>
Jan 20 19:15:14 compute-0 nova_compute[254061]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.860 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Preparing to wait for external event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.860 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.860 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.861 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.861 254065 DEBUG nova.virt.libvirt.vif [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:15:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-687559922',display_name='tempest-TestNetworkBasicOps-server-687559922',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-687559922',id=11,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5+/TlwhQfGy5x5Qu13nVCd3hZZgyJqcdIE8MBsjPZKKeVwNAJjbXcfTn5a0nUT3sF8nVHjNP5cE4VCngPg11b25JCd13c+VH8ik9H9ryZ3Q54ulsbxEsCug3YhIstfOA==',key_name='tempest-TestNetworkBasicOps-576438739',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-49ymama9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:15:08Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=464ffed9-a738-406a-9a42-2bd3d60d27f2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.861 254065 DEBUG nova.network.os_vif_util [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.862 254065 DEBUG nova.network.os_vif_util [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.862 254065 DEBUG os_vif [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.863 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.863 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.863 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.866 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.866 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap49f8aed7-e7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.866 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap49f8aed7-e7, col_values=(('external_ids', {'iface-id': '49f8aed7-e782-474a-b2f1-2fbc7b04e852', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:26:34', 'vm-uuid': '464ffed9-a738-406a-9a42-2bd3d60d27f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.868 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:14 compute-0 NetworkManager[48914]: <info>  [1768936514.8697] manager: (tap49f8aed7-e7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.870 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.874 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.876 254065 INFO os_vif [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7')
Jan 20 19:15:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v999: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.941 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.943 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.943 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:9f:26:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.945 254065 INFO nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Using config drive
Jan 20 19:15:14 compute-0 nova_compute[254061]: 2026-01-20 19:15:14.981 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.158 254065 DEBUG nova.network.neutron [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updated VIF entry in instance network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.158 254065 DEBUG nova.network.neutron [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.176 254065 DEBUG oslo_concurrency.lockutils [req-04bcd40c-7514-4c03-8581-f05abe829988 req-72d951e5-e496-4a5f-b58d-e6062a51e61e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.326 254065 INFO nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Creating config drive at /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/disk.config
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.335 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpla7caw3n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4124081107' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:15:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1919448934' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.469 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpla7caw3n" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.500 254065 DEBUG nova.storage.rbd_utils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:15:15 compute-0 nova_compute[254061]: 2026-01-20 19:15:15.503 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/disk.config 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:15:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:15.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:16 compute-0 nova_compute[254061]: 2026-01-20 19:15:16.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:15:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:16.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:16 compute-0 nova_compute[254061]: 2026-01-20 19:15:16.780 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1000: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:15:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:17.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.278 254065 DEBUG oslo_concurrency.processutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/disk.config 464ffed9-a738-406a-9a42-2bd3d60d27f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.279 254065 INFO nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Deleting local config drive /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2/disk.config because it was imported into RBD.
Jan 20 19:15:17 compute-0 kernel: tap49f8aed7-e7: entered promiscuous mode
Jan 20 19:15:17 compute-0 NetworkManager[48914]: <info>  [1768936517.3383] manager: (tap49f8aed7-e7): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Jan 20 19:15:17 compute-0 ovn_controller[155128]: 2026-01-20T19:15:17Z|00081|binding|INFO|Claiming lport 49f8aed7-e782-474a-b2f1-2fbc7b04e852 for this chassis.
Jan 20 19:15:17 compute-0 ovn_controller[155128]: 2026-01-20T19:15:17Z|00082|binding|INFO|49f8aed7-e782-474a-b2f1-2fbc7b04e852: Claiming fa:16:3e:9f:26:34 10.100.0.9
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.338 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.345 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.354 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:26:34 10.100.0.9'], port_security=['fa:16:3e:9f:26:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '464ffed9-a738-406a-9a42-2bd3d60d27f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a39bda74-9c15-44c1-83ec-b9e2df1ecff1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=244f1299-7dd7-4257-a749-5b40ece19c33, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=49f8aed7-e782-474a-b2f1-2fbc7b04e852) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.356 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 in datapath 43656a85-4118-4ce7-9ff6-eff8095a7ad3 bound to our chassis
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.358 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 43656a85-4118-4ce7-9ff6-eff8095a7ad3
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.370 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[51fc1797-601b-4064-9b0b-3314d9294cc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.372 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap43656a85-41 in ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.374 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap43656a85-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.375 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3c085b-4e60-4f46-b19a-211b765eab3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.375 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[39569837-20a4-45c3-9e12-9a0b28209bd5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 systemd-machined[220746]: New machine qemu-5-instance-0000000b.
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.388 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[d4d6b145-b140-4536-8d89-cb1d58549402]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-0000000b.
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.407 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 ovn_controller[155128]: 2026-01-20T19:15:17Z|00083|binding|INFO|Setting lport 49f8aed7-e782-474a-b2f1-2fbc7b04e852 ovn-installed in OVS
Jan 20 19:15:17 compute-0 ovn_controller[155128]: 2026-01-20T19:15:17Z|00084|binding|INFO|Setting lport 49f8aed7-e782-474a-b2f1-2fbc7b04e852 up in Southbound
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.415 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[c24450a7-217b-4be2-bdc1-02a46ae7ab30]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.417 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 systemd-udevd[272592]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:15:17 compute-0 NetworkManager[48914]: <info>  [1768936517.4489] device (tap49f8aed7-e7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.447 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[59da9704-6508-45b7-b399-c06790a849aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 NetworkManager[48914]: <info>  [1768936517.4501] device (tap49f8aed7-e7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:15:17 compute-0 NetworkManager[48914]: <info>  [1768936517.4557] manager: (tap43656a85-40): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.455 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[81c76d7f-6c43-4cd9-ac93-50ef12f8ef5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.488 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[40f4d6c1-72ab-4822-9922-23b9e6eb7c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.490 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[61701920-497d-438d-9d4a-24f1b47b6026]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 NetworkManager[48914]: <info>  [1768936517.5259] device (tap43656a85-40): carrier: link connected
Jan 20 19:15:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=cleanup t=2026-01-20T19:15:17.52697218Z level=info msg="Completed cleanup jobs" duration=17.084613ms
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.531 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[1da4f2ef-1c98-482f-ab8e-0eee86d05694]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.547 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[01519664-73e8-4e7c-815d-3d3d528b6ff3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43656a85-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:14:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471331, 'reachable_time': 34192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272620, 'error': None, 'target': 'ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.562 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[51288c6c-6303-4008-9e22-934385758abb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5b:141e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471331, 'tstamp': 471331}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272622, 'error': None, 'target': 'ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.596 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[74b9e1f9-ef04-468c-ad6c-4a25b70676a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap43656a85-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5b:14:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471331, 'reachable_time': 34192, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272623, 'error': None, 'target': 'ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.625 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa5b349-5bf6-469c-a121-98bcc24eba31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugins.update.checker t=2026-01-20T19:15:17.653898153Z level=info msg="Update check succeeded" duration=61.32187ms
Jan 20 19:15:17 compute-0 ceph-mon[74381]: pgmap v999: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.0 MiB/s wr, 30 op/s
Jan 20 19:15:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana.update.checker t=2026-01-20T19:15:17.66707207Z level=info msg="Update check succeeded" duration=74.495256ms
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.693 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[5f313721-4513-486f-9eab-f43b44ecb088]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.695 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43656a85-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.695 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.696 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap43656a85-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:15:17 compute-0 NetworkManager[48914]: <info>  [1768936517.6984] manager: (tap43656a85-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.697 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 kernel: tap43656a85-40: entered promiscuous mode
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.703 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap43656a85-40, col_values=(('external_ids', {'iface-id': 'e6ecdda6-0823-4e5d-9393-ba709d8369b3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:15:17 compute-0 ovn_controller[155128]: 2026-01-20T19:15:17Z|00085|binding|INFO|Releasing lport e6ecdda6-0823-4e5d-9393-ba709d8369b3 from this chassis (sb_readonly=0)
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.704 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.704 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.707 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/43656a85-4118-4ce7-9ff6-eff8095a7ad3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/43656a85-4118-4ce7-9ff6-eff8095a7ad3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.718 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[c0c833e5-b8ac-4753-9f3e-353a87f11d93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.719 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-43656a85-4118-4ce7-9ff6-eff8095a7ad3
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/43656a85-4118-4ce7-9ff6-eff8095a7ad3.pid.haproxy
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID 43656a85-4118-4ce7-9ff6-eff8095a7ad3
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:15:17 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:17.719 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'env', 'PROCESS_TAG=haproxy-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/43656a85-4118-4ce7-9ff6-eff8095a7ad3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 19:15:17 compute-0 nova_compute[254061]: 2026-01-20 19:15:17.721 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:17.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.017 254065 DEBUG nova.compute.manager [req-c1427bc8-068f-4ea1-bd32-9c9757cc6843 req-69df20d4-b0d9-499d-85a0-d1aadd2964e8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.018 254065 DEBUG oslo_concurrency.lockutils [req-c1427bc8-068f-4ea1-bd32-9c9757cc6843 req-69df20d4-b0d9-499d-85a0-d1aadd2964e8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.019 254065 DEBUG oslo_concurrency.lockutils [req-c1427bc8-068f-4ea1-bd32-9c9757cc6843 req-69df20d4-b0d9-499d-85a0-d1aadd2964e8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.019 254065 DEBUG oslo_concurrency.lockutils [req-c1427bc8-068f-4ea1-bd32-9c9757cc6843 req-69df20d4-b0d9-499d-85a0-d1aadd2964e8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.019 254065 DEBUG nova.compute.manager [req-c1427bc8-068f-4ea1-bd32-9c9757cc6843 req-69df20d4-b0d9-499d-85a0-d1aadd2964e8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Processing event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 19:15:18 compute-0 podman[272689]: 2026-01-20 19:15:18.115383918 +0000 UTC m=+0.081500156 container create 111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 19:15:18 compute-0 systemd[1]: Started libpod-conmon-111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68.scope.
Jan 20 19:15:18 compute-0 podman[272689]: 2026-01-20 19:15:18.056978238 +0000 UTC m=+0.023094536 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.158 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936518.1578763, 464ffed9-a738-406a-9a42-2bd3d60d27f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.159 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] VM Started (Lifecycle Event)
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.160 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 19:15:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.164 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.167 254065 INFO nova.virt.libvirt.driver [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Instance spawned successfully.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.167 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 19:15:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c72ed1dee5e9a9168bce39b81e56ec7ef91349e508be298f6a2cb6abc4fe20f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.181 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:15:18 compute-0 podman[272689]: 2026-01-20 19:15:18.183256184 +0000 UTC m=+0.149372452 container init 111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.184 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:15:18 compute-0 podman[272689]: 2026-01-20 19:15:18.18867357 +0000 UTC m=+0.154789808 container start 111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.191 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.191 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.191 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.192 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.192 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.193 254065 DEBUG nova.virt.libvirt.driver [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:15:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:18.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:18 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [NOTICE]   (272718) : New worker (272720) forked
Jan 20 19:15:18 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [NOTICE]   (272718) : Loading success.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.219 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.219 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936518.1586697, 464ffed9-a738-406a-9a42-2bd3d60d27f2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.219 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] VM Paused (Lifecycle Event)
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.247 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.250 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936518.1637356, 464ffed9-a738-406a-9a42-2bd3d60d27f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.250 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] VM Resumed (Lifecycle Event)
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.256 254065 INFO nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Took 9.25 seconds to spawn the instance on the hypervisor.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.256 254065 DEBUG nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.268 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.270 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.294 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.317 254065 INFO nova.compute.manager [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Took 10.28 seconds to build instance.
Jan 20 19:15:18 compute-0 nova_compute[254061]: 2026-01-20 19:15:18.331 254065 DEBUG oslo_concurrency.lockutils [None req-bee56693-0fd9-4bf9-8cf3-390287001f19 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.374s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:18 compute-0 ceph-mon[74381]: pgmap v1000: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:15:18 compute-0 sudo[272730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:15:18 compute-0 sudo[272730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:18 compute-0 sudo[272730]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1001: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:15:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:18.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:15:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:15:19 compute-0 ceph-mon[74381]: pgmap v1001: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:15:19 compute-0 nova_compute[254061]: 2026-01-20 19:15:19.868 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:19.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:20 compute-0 nova_compute[254061]: 2026-01-20 19:15:20.105 254065 DEBUG nova.compute.manager [req-1b902128-f047-4c05-82fb-3f37532ded71 req-b75ae619-e2be-4038-b1eb-bf21c6ec584e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:15:20 compute-0 nova_compute[254061]: 2026-01-20 19:15:20.105 254065 DEBUG oslo_concurrency.lockutils [req-1b902128-f047-4c05-82fb-3f37532ded71 req-b75ae619-e2be-4038-b1eb-bf21c6ec584e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:20 compute-0 nova_compute[254061]: 2026-01-20 19:15:20.106 254065 DEBUG oslo_concurrency.lockutils [req-1b902128-f047-4c05-82fb-3f37532ded71 req-b75ae619-e2be-4038-b1eb-bf21c6ec584e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:20 compute-0 nova_compute[254061]: 2026-01-20 19:15:20.106 254065 DEBUG oslo_concurrency.lockutils [req-1b902128-f047-4c05-82fb-3f37532ded71 req-b75ae619-e2be-4038-b1eb-bf21c6ec584e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:20 compute-0 nova_compute[254061]: 2026-01-20 19:15:20.106 254065 DEBUG nova.compute.manager [req-1b902128-f047-4c05-82fb-3f37532ded71 req-b75ae619-e2be-4038-b1eb-bf21c6ec584e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:15:20 compute-0 nova_compute[254061]: 2026-01-20 19:15:20.106 254065 WARNING nova.compute.manager [req-1b902128-f047-4c05-82fb-3f37532ded71 req-b75ae619-e2be-4038-b1eb-bf21c6ec584e 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state active and task_state None.
Jan 20 19:15:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:20.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1002: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 19:15:21 compute-0 nova_compute[254061]: 2026-01-20 19:15:21.782 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:21.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:22 compute-0 ceph-mon[74381]: pgmap v1002: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 1.8 MiB/s wr, 42 op/s
Jan 20 19:15:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:22.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1003: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 101 op/s
Jan 20 19:15:23 compute-0 ovn_controller[155128]: 2026-01-20T19:15:23Z|00086|binding|INFO|Releasing lport e6ecdda6-0823-4e5d-9393-ba709d8369b3 from this chassis (sb_readonly=0)
Jan 20 19:15:23 compute-0 NetworkManager[48914]: <info>  [1768936523.6437] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 20 19:15:23 compute-0 NetworkManager[48914]: <info>  [1768936523.6446] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 20 19:15:23 compute-0 nova_compute[254061]: 2026-01-20 19:15:23.642 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:23 compute-0 ovn_controller[155128]: 2026-01-20T19:15:23Z|00087|binding|INFO|Releasing lport e6ecdda6-0823-4e5d-9393-ba709d8369b3 from this chassis (sb_readonly=0)
Jan 20 19:15:23 compute-0 nova_compute[254061]: 2026-01-20 19:15:23.683 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:23 compute-0 nova_compute[254061]: 2026-01-20 19:15:23.688 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:24 compute-0 nova_compute[254061]: 2026-01-20 19:15:24.051 254065 DEBUG nova.compute.manager [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:15:24 compute-0 nova_compute[254061]: 2026-01-20 19:15:24.051 254065 DEBUG nova.compute.manager [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing instance network info cache due to event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:15:24 compute-0 nova_compute[254061]: 2026-01-20 19:15:24.051 254065 DEBUG oslo_concurrency.lockutils [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:15:24 compute-0 nova_compute[254061]: 2026-01-20 19:15:24.051 254065 DEBUG oslo_concurrency.lockutils [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:15:24 compute-0 nova_compute[254061]: 2026-01-20 19:15:24.051 254065 DEBUG nova.network.neutron [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:15:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:24.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:24 compute-0 ceph-mon[74381]: pgmap v1003: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 101 op/s
Jan 20 19:15:24 compute-0 nova_compute[254061]: 2026-01-20 19:15:24.869 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1004: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:15:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:15:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:26.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:26 compute-0 ceph-mon[74381]: pgmap v1004: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:15:26 compute-0 nova_compute[254061]: 2026-01-20 19:15:26.784 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1005: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 20 19:15:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:27 compute-0 nova_compute[254061]: 2026-01-20 19:15:27.066 254065 DEBUG nova.network.neutron [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updated VIF entry in instance network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:15:27 compute-0 nova_compute[254061]: 2026-01-20 19:15:27.067 254065 DEBUG nova.network.neutron [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:15:27 compute-0 nova_compute[254061]: 2026-01-20 19:15:27.087 254065 DEBUG oslo_concurrency.lockutils [req-268d1c03-9249-4891-abca-a9a40fd6b2d3 req-97f722ff-7f69-48a2-9840-cb1d1b68e450 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:15:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:27.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:28 compute-0 podman[272765]: 2026-01-20 19:15:28.070489256 +0000 UTC m=+0.050289752 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:15:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:28.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:28 compute-0 ceph-mon[74381]: pgmap v1005: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Jan 20 19:15:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1006: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:15:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:28.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:29] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:15:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:29] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:15:29 compute-0 nova_compute[254061]: 2026-01-20 19:15:29.871 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:29.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:30.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:30.292 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:30.292 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:15:30.293 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:30 compute-0 ceph-mon[74381]: pgmap v1006: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:15:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1007: 337 pgs: 337 active+clean; 91 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 292 KiB/s wr, 81 op/s
Jan 20 19:15:31 compute-0 ovn_controller[155128]: 2026-01-20T19:15:31Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:26:34 10.100.0.9
Jan 20 19:15:31 compute-0 ovn_controller[155128]: 2026-01-20T19:15:31Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:26:34 10.100.0.9
Jan 20 19:15:31 compute-0 nova_compute[254061]: 2026-01-20 19:15:31.786 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:32.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:32 compute-0 ceph-mon[74381]: pgmap v1007: 337 pgs: 337 active+clean; 91 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 292 KiB/s wr, 81 op/s
Jan 20 19:15:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1100128627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:15:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1008: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 20 19:15:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:34 compute-0 ceph-mon[74381]: pgmap v1008: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 20 19:15:34 compute-0 nova_compute[254061]: 2026-01-20 19:15:34.873 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1009: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:15:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:35.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:36 compute-0 ceph-mon[74381]: pgmap v1009: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:15:36 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2761920884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:15:36 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4086997786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:15:36 compute-0 nova_compute[254061]: 2026-01-20 19:15:36.789 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1010: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 19:15:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:37 compute-0 podman[272796]: 2026-01-20 19:15:37.137743435 +0000 UTC m=+0.109370480 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 19:15:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:37.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:37.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:38.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:38 compute-0 ceph-mon[74381]: pgmap v1010: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 344 KiB/s rd, 3.9 MiB/s wr, 92 op/s
Jan 20 19:15:38 compute-0 sudo[272826]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:15:38 compute-0 sudo[272826]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:38 compute-0 sudo[272826]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1011: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 20 19:15:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:38.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:39] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:15:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:39] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:15:39 compute-0 nova_compute[254061]: 2026-01-20 19:15:39.874 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:39.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:40.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:40 compute-0 ceph-mon[74381]: pgmap v1011: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 91 op/s
Jan 20 19:15:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:15:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1012: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Jan 20 19:15:41 compute-0 nova_compute[254061]: 2026-01-20 19:15:41.792 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:41.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:42.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:42 compute-0 ceph-mon[74381]: pgmap v1012: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.9 MiB/s wr, 118 op/s
Jan 20 19:15:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1013: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 159 op/s
Jan 20 19:15:43 compute-0 ceph-mon[74381]: pgmap v1013: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 159 op/s
Jan 20 19:15:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:43.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:44.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:44 compute-0 nova_compute[254061]: 2026-01-20 19:15:44.878 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1014: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 19:15:45 compute-0 ceph-mon[74381]: pgmap v1014: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.943684) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936545943756, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1265, "num_deletes": 503, "total_data_size": 1821953, "memory_usage": 1860256, "flush_reason": "Manual Compaction"}
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936545955312, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1775749, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28695, "largest_seqno": 29959, "table_properties": {"data_size": 1770020, "index_size": 2613, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 15947, "raw_average_key_size": 19, "raw_value_size": 1756567, "raw_average_value_size": 2179, "num_data_blocks": 113, "num_entries": 806, "num_filter_entries": 806, "num_deletions": 503, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936462, "oldest_key_time": 1768936462, "file_creation_time": 1768936545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 11664 microseconds, and 4525 cpu microseconds.
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.955355) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1775749 bytes OK
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.955374) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.957735) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.957747) EVENT_LOG_v1 {"time_micros": 1768936545957743, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.957763) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1815211, prev total WAL file size 1815211, number of live WAL files 2.
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.958465) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(1734KB)], [62(16MB)]
Jan 20 19:15:45 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936545958583, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 19365618, "oldest_snapshot_seqno": -1}
Jan 20 19:15:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:45.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6048 keys, 13079829 bytes, temperature: kUnknown
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936546081352, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 13079829, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13040972, "index_size": 22619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15173, "raw_key_size": 156486, "raw_average_key_size": 25, "raw_value_size": 12933296, "raw_average_value_size": 2138, "num_data_blocks": 902, "num_entries": 6048, "num_filter_entries": 6048, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936545, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.081609) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 13079829 bytes
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.083338) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.7 rd, 106.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 16.8 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(18.3) write-amplify(7.4) OK, records in: 7073, records dropped: 1025 output_compression: NoCompression
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.083363) EVENT_LOG_v1 {"time_micros": 1768936546083351, "job": 34, "event": "compaction_finished", "compaction_time_micros": 122819, "compaction_time_cpu_micros": 57006, "output_level": 6, "num_output_files": 1, "total_output_size": 13079829, "num_input_records": 7073, "num_output_records": 6048, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936546083983, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936546088775, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:45.958317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.088864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.088869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.088870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.088872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:15:46 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:15:46.088873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:15:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:46.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:46 compute-0 nova_compute[254061]: 2026-01-20 19:15:46.794 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1015: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 20 19:15:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:47.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:47 compute-0 ceph-mon[74381]: pgmap v1015: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Jan 20 19:15:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:47.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:48.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:15:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/600117930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:15:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:15:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/600117930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:15:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1016: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Jan 20 19:15:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:48.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/600117930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:15:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/600117930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:15:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:49] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:15:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:49] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Jan 20 19:15:49 compute-0 nova_compute[254061]: 2026-01-20 19:15:49.922 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:49.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:50.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:50 compute-0 ceph-mon[74381]: pgmap v1016: 337 pgs: 337 active+clean; 167 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 75 op/s
Jan 20 19:15:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1017: 337 pgs: 337 active+clean; 170 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 396 KiB/s wr, 82 op/s
Jan 20 19:15:51 compute-0 ceph-mon[74381]: pgmap v1017: 337 pgs: 337 active+clean; 170 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 396 KiB/s wr, 82 op/s
Jan 20 19:15:51 compute-0 nova_compute[254061]: 2026-01-20 19:15:51.796 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:15:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:51.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:15:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:52.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1018: 337 pgs: 337 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 72 op/s
Jan 20 19:15:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:53.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:53 compute-0 ceph-mon[74381]: pgmap v1018: 337 pgs: 337 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.0 MiB/s wr, 72 op/s
Jan 20 19:15:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:54.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1019: 337 pgs: 337 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Jan 20 19:15:54 compute-0 nova_compute[254061]: 2026-01-20 19:15:54.924 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:15:55
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.control', 'volumes', 'images', 'backups', '.nfs', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014968616713876407 of space, bias 1.0, pg target 0.4490585014162922 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:15:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:15:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:55.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:56 compute-0 ceph-mon[74381]: pgmap v1019: 337 pgs: 337 active+clean; 188 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.0 MiB/s wr, 24 op/s
Jan 20 19:15:56 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:15:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:56.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:56 compute-0 nova_compute[254061]: 2026-01-20 19:15:56.799 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1020: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 19:15:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:15:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:57.214Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:15:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:57.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:15:58 compute-0 ceph-mon[74381]: pgmap v1020: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 391 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 20 19:15:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:15:58.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:15:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1021: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 19:15:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:15:58.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:15:58 compute-0 sudo[272871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:15:58 compute-0 sudo[272871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:58 compute-0 sudo[272871]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:58 compute-0 podman[272895]: 2026-01-20 19:15:58.990436951 +0000 UTC m=+0.047052493 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:15:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:59] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 20 19:15:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:15:59] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 20 19:15:59 compute-0 nova_compute[254061]: 2026-01-20 19:15:59.928 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:15:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:15:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:15:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:15:59.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:00 compute-0 ceph-mon[74381]: pgmap v1021: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 390 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 19:16:00 compute-0 nova_compute[254061]: 2026-01-20 19:16:00.059 254065 INFO nova.compute.manager [None req-a8b65e8e-aea4-494f-ad59-42f8ab8c12cc d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Get console output
Jan 20 19:16:00 compute-0 nova_compute[254061]: 2026-01-20 19:16:00.067 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:16:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:00.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1022: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 19:16:00 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:00.961 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:16:00 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:00.962 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:16:00 compute-0 nova_compute[254061]: 2026-01-20 19:16:00.963 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:01 compute-0 nova_compute[254061]: 2026-01-20 19:16:01.175 254065 DEBUG nova.compute.manager [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:01 compute-0 nova_compute[254061]: 2026-01-20 19:16:01.176 254065 DEBUG nova.compute.manager [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing instance network info cache due to event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:16:01 compute-0 nova_compute[254061]: 2026-01-20 19:16:01.176 254065 DEBUG oslo_concurrency.lockutils [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:16:01 compute-0 nova_compute[254061]: 2026-01-20 19:16:01.176 254065 DEBUG oslo_concurrency.lockutils [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:16:01 compute-0 nova_compute[254061]: 2026-01-20 19:16:01.176 254065 DEBUG nova.network.neutron [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:16:01 compute-0 nova_compute[254061]: 2026-01-20 19:16:01.861 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:01.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:02 compute-0 ceph-mon[74381]: pgmap v1022: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 396 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 20 19:16:02 compute-0 nova_compute[254061]: 2026-01-20 19:16:02.171 254065 INFO nova.compute.manager [None req-b2db8dda-d7dd-447c-927c-e23ea0f7dbd1 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Get console output
Jan 20 19:16:02 compute-0 nova_compute[254061]: 2026-01-20 19:16:02.174 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:16:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:02.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:02 compute-0 nova_compute[254061]: 2026-01-20 19:16:02.518 254065 DEBUG nova.network.neutron [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updated VIF entry in instance network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:16:02 compute-0 nova_compute[254061]: 2026-01-20 19:16:02.519 254065 DEBUG nova.network.neutron [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:02 compute-0 nova_compute[254061]: 2026-01-20 19:16:02.535 254065 DEBUG oslo_concurrency.lockutils [req-a3518bc1-bfe9-4da5-bdf0-dd2e591f180e req-d7941a5c-5483-47e0-807f-cbd10132c480 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:16:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1023: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.275 254065 DEBUG nova.compute.manager [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-unplugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.276 254065 DEBUG oslo_concurrency.lockutils [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.276 254065 DEBUG oslo_concurrency.lockutils [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.276 254065 DEBUG oslo_concurrency.lockutils [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.276 254065 DEBUG nova.compute.manager [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-unplugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.276 254065 WARNING nova.compute.manager [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-unplugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state active and task_state None.
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.277 254065 DEBUG nova.compute.manager [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.277 254065 DEBUG oslo_concurrency.lockutils [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.277 254065 DEBUG oslo_concurrency.lockutils [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.277 254065 DEBUG oslo_concurrency.lockutils [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.278 254065 DEBUG nova.compute.manager [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.278 254065 WARNING nova.compute.manager [req-e1ad5d15-fdb8-4b7a-9954-1ea30c42261c req-159601d7-40ef-4816-a340-6137c31af401 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state active and task_state None.
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.893 254065 INFO nova.compute.manager [None req-ef007ed7-009c-49f1-8795-3ceef48cfa01 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Get console output
Jan 20 19:16:03 compute-0 nova_compute[254061]: 2026-01-20 19:16:03.898 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:16:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:16:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:03.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:16:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:04.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:04 compute-0 ceph-mon[74381]: pgmap v1023: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 20 19:16:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1024: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 196 KiB/s wr, 44 op/s
Jan 20 19:16:04 compute-0 nova_compute[254061]: 2026-01-20 19:16:04.932 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:05 compute-0 nova_compute[254061]: 2026-01-20 19:16:05.400 254065 DEBUG nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:05 compute-0 nova_compute[254061]: 2026-01-20 19:16:05.401 254065 DEBUG nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing instance network info cache due to event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:16:05 compute-0 nova_compute[254061]: 2026-01-20 19:16:05.402 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:16:05 compute-0 nova_compute[254061]: 2026-01-20 19:16:05.402 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:16:05 compute-0 nova_compute[254061]: 2026-01-20 19:16:05.402 254065 DEBUG nova.network.neutron [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:16:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:05.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.155 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.156 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:06.254 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:06 compute-0 ceph-mon[74381]: pgmap v1024: 337 pgs: 337 active+clean; 200 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 196 KiB/s wr, 44 op/s
Jan 20 19:16:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:16:06 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5793095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.638 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.707 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.707 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.862 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.876 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.877 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4356MB free_disk=59.897186279296875GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.877 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.877 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1025: 337 pgs: 337 active+clean; 121 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 203 KiB/s wr, 73 op/s
Jan 20 19:16:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.965 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Instance 464ffed9-a738-406a-9a42-2bd3d60d27f2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.965 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:16:06 compute-0 nova_compute[254061]: 2026-01-20 19:16:06.966 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.012 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:07.216Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:07.216Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.277 254065 DEBUG nova.network.neutron [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updated VIF entry in instance network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.278 254065 DEBUG nova.network.neutron [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.303 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.303 254065 DEBUG nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.303 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.304 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.304 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.304 254065 DEBUG nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.304 254065 WARNING nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state active and task_state None.
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.305 254065 DEBUG nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.305 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.305 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.305 254065 DEBUG oslo_concurrency.lockutils [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.305 254065 DEBUG nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.306 254065 WARNING nova.compute.manager [req-fa45f18a-143c-4150-8623-8f8876df03fe req-8a2772b8-ab31-4711-bfd9-2147cd7d74c5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state active and task_state None.
Jan 20 19:16:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:16:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/94013702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.467 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.473 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.487 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.505 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:16:07 compute-0 nova_compute[254061]: 2026-01-20 19:16:07.505 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/5793095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/94013702' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:07.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:08 compute-0 podman[272968]: 2026-01-20 19:16:08.106151411 +0000 UTC m=+0.086409829 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:08.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:08 compute-0 nova_compute[254061]: 2026-01-20 19:16:08.505 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:08 compute-0 ceph-mon[74381]: pgmap v1025: 337 pgs: 337 active+clean; 121 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 203 KiB/s wr, 73 op/s
Jan 20 19:16:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2464105748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1026: 337 pgs: 337 active+clean; 121 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 20 19:16:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:08.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:16:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:08.903Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:08.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:09 compute-0 nova_compute[254061]: 2026-01-20 19:16:09.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:09] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 20 19:16:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:09] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Jan 20 19:16:09 compute-0 nova_compute[254061]: 2026-01-20 19:16:09.934 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:10.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:10 compute-0 nova_compute[254061]: 2026-01-20 19:16:10.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:16:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:10.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:16:10 compute-0 sudo[272999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:10 compute-0 sudo[272999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:10 compute-0 sudo[272999]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:10 compute-0 sudo[273025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:16:10 compute-0 sudo[273025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:10 compute-0 ceph-mon[74381]: pgmap v1026: 337 pgs: 337 active+clean; 121 MiB data, 361 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 20 19:16:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:16:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1027: 337 pgs: 337 active+clean; 121 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 20 19:16:10 compute-0 sudo[273025]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 19:16:10 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:10 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:10.964 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.147 254065 DEBUG nova.compute.manager [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.148 254065 DEBUG nova.compute.manager [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing instance network info cache due to event network-changed-49f8aed7-e782-474a-b2f1-2fbc7b04e852. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.148 254065 DEBUG oslo_concurrency.lockutils [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.148 254065 DEBUG oslo_concurrency.lockutils [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.148 254065 DEBUG nova.network.neutron [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Refreshing network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.199 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.199 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.199 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.200 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.200 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.201 254065 INFO nova.compute.manager [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Terminating instance
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.201 254065 DEBUG nova.compute.manager [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 19:16:11 compute-0 kernel: tap49f8aed7-e7 (unregistering): left promiscuous mode
Jan 20 19:16:11 compute-0 NetworkManager[48914]: <info>  [1768936571.2595] device (tap49f8aed7-e7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:16:11 compute-0 ovn_controller[155128]: 2026-01-20T19:16:11Z|00088|binding|INFO|Releasing lport 49f8aed7-e782-474a-b2f1-2fbc7b04e852 from this chassis (sb_readonly=0)
Jan 20 19:16:11 compute-0 ovn_controller[155128]: 2026-01-20T19:16:11Z|00089|binding|INFO|Setting lport 49f8aed7-e782-474a-b2f1-2fbc7b04e852 down in Southbound
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.263 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 ovn_controller[155128]: 2026-01-20T19:16:11Z|00090|binding|INFO|Removing iface tap49f8aed7-e7 ovn-installed in OVS
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.275 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:26:34 10.100.0.9'], port_security=['fa:16:3e:9f:26:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '464ffed9-a738-406a-9a42-2bd3d60d27f2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'a39bda74-9c15-44c1-83ec-b9e2df1ecff1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=244f1299-7dd7-4257-a749-5b40ece19c33, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=49f8aed7-e782-474a-b2f1-2fbc7b04e852) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.277 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 49f8aed7-e782-474a-b2f1-2fbc7b04e852 in datapath 43656a85-4118-4ce7-9ff6-eff8095a7ad3 unbound from our chassis
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.278 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 43656a85-4118-4ce7-9ff6-eff8095a7ad3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.279 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a77d07b8-79a7-4192-b3f9-d3dcd1991389]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.280 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3 namespace which is not needed anymore
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.292 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 20 19:16:11 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d0000000b.scope: Consumed 14.313s CPU time.
Jan 20 19:16:11 compute-0 systemd-machined[220746]: Machine qemu-5-instance-0000000b terminated.
Jan 20 19:16:11 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [NOTICE]   (272718) : haproxy version is 2.8.14-c23fe91
Jan 20 19:16:11 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [NOTICE]   (272718) : path to executable is /usr/sbin/haproxy
Jan 20 19:16:11 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [WARNING]  (272718) : Exiting Master process...
Jan 20 19:16:11 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [WARNING]  (272718) : Exiting Master process...
Jan 20 19:16:11 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [ALERT]    (272718) : Current worker (272720) exited with code 143 (Terminated)
Jan 20 19:16:11 compute-0 neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3[272714]: [WARNING]  (272718) : All workers exited. Exiting... (0)
Jan 20 19:16:11 compute-0 systemd[1]: libpod-111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68.scope: Deactivated successfully.
Jan 20 19:16:11 compute-0 podman[273105]: 2026-01-20 19:16:11.419828447 +0000 UTC m=+0.041160585 container died 111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.435 254065 INFO nova.virt.libvirt.driver [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Instance destroyed successfully.
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.436 254065 DEBUG nova.objects.instance [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'resources' on Instance uuid 464ffed9-a738-406a-9a42-2bd3d60d27f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68-userdata-shm.mount: Deactivated successfully.
Jan 20 19:16:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c72ed1dee5e9a9168bce39b81e56ec7ef91349e508be298f6a2cb6abc4fe20f-merged.mount: Deactivated successfully.
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.458 254065 DEBUG nova.virt.libvirt.vif [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:15:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-687559922',display_name='tempest-TestNetworkBasicOps-server-687559922',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-687559922',id=11,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH5+/TlwhQfGy5x5Qu13nVCd3hZZgyJqcdIE8MBsjPZKKeVwNAJjbXcfTn5a0nUT3sF8nVHjNP5cE4VCngPg11b25JCd13c+VH8ik9H9ryZ3Q54ulsbxEsCug3YhIstfOA==',key_name='tempest-TestNetworkBasicOps-576438739',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:15:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-49ymama9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:15:18Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=464ffed9-a738-406a-9a42-2bd3d60d27f2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.458 254065 DEBUG nova.network.os_vif_util [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.459 254065 DEBUG nova.network.os_vif_util [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:16:11 compute-0 podman[273105]: 2026-01-20 19:16:11.459446999 +0000 UTC m=+0.080779127 container cleanup 111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.459 254065 DEBUG os_vif [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.461 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.461 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap49f8aed7-e7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
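The DelPortCommand transaction logged above corresponds to ovsdbapp's Open_vSwitch API; a minimal sketch of issuing the same delete (the ovsdb socket path is an assumption, not taken from this log):

```python
# Sketch: delete an OVS port via ovsdbapp, as in the DelPortCommand above.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Socket path is an assumption; any ovsdb connection string works here.
idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

# if_exists=True makes the delete a no-op if the port is already gone,
# which is why this unplug path is idempotent.
api.del_port('tap49f8aed7-e7', bridge='br-int',
             if_exists=True).execute(check_error=True)
```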
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.462 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.464 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.466 254065 INFO os_vif [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:26:34,bridge_name='br-int',has_traffic_filtering=True,id=49f8aed7-e782-474a-b2f1-2fbc7b04e852,network=Network(43656a85-4118-4ce7-9ff6-eff8095a7ad3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap49f8aed7-e7')
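The unplug above goes through os-vif's public entry points; a minimal sketch with values copied from the log (field set trimmed to what the log shows; requires os-vif and its 'ovs' plugin to be installed):

```python
# Sketch: unplug a VIF via os-vif, mirroring the log entry above.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()  # loads plugins (e.g. 'ovs') via stevedore

net = network.Network(id='43656a85-4118-4ce7-9ff6-eff8095a7ad3',
                      bridge='br-int')
port = vif.VIFOpenVSwitch(id='49f8aed7-e782-474a-b2f1-2fbc7b04e852',
                          address='fa:16:3e:9f:26:34',
                          vif_name='tap49f8aed7-e7',
                          bridge_name='br-int',
                          plugin='ovs',
                          network=net)
inst = instance_info.InstanceInfo(
    uuid='464ffed9-a738-406a-9a42-2bd3d60d27f2',
    name='tempest-TestNetworkBasicOps-server-687559922')

os_vif.unplug(port, inst)  # dispatches to the plugin named in the VIF
```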
Jan 20 19:16:11 compute-0 systemd[1]: libpod-conmon-111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68.scope: Deactivated successfully.
Jan 20 19:16:11 compute-0 podman[273145]: 2026-01-20 19:16:11.523989485 +0000 UTC m=+0.042086770 container remove 111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.529 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[1a20363a-da61-4cf6-b0c9-53273a2766ec]: (4, ('Tue Jan 20 07:16:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3 (111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68)\n111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68\nTue Jan 20 07:16:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3 (111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68)\n111c6c5127b4f1e4e81426c2856a55e636ca12205233bbc0bfb44a6e7831db68\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.530 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ba038a-d843-45a0-b6ce-3cac8aea9517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.531 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap43656a85-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.533 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 kernel: tap43656a85-40: left promiscuous mode
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.552 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.554 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[52a47c2e-b311-40f3-96e0-62254145bd97]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.578 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3c6660-89ce-4c37-8b86-048058914950]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.579 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b1ba78-4653-49a4-b5c6-106cfa51c0eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.594 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b06a5110-9b11-4e34-b0b3-cae704bc637b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471322, 'reachable_time': 38762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273180, 'error': None, 'target': 'ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d43656a85\x2d4118\x2d4ce7\x2d9ff6\x2deff8095a7ad3.mount: Deactivated successfully.
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.598 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
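remove_netns in neutron's privileged ip_lib is a thin wrapper over pyroute2; a minimal sketch of the same deletion (needs the privileges the privsep daemon runs with):

```python
# Sketch: delete the per-network metadata namespace, as remove_netns does.
from pyroute2 import netns

ns = 'ovnmeta-43656a85-4118-4ce7-9ff6-eff8095a7ad3'
if ns in netns.listnetns():
    netns.remove(ns)  # drops the /run/netns handle for the namespace
```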
Jan 20 19:16:11 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:11.598 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[e0886d60-7237-4b75-b7d5-897f20e69005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:11 compute-0 ceph-mon[74381]: pgmap v1027: 337 pgs: 337 active+clean; 121 MiB data, 351 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 20 19:16:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:11 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:11 compute-0 nova_compute[254061]: 2026-01-20 19:16:11.907 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:12.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
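The recurring beast lines are anonymous "HEAD /" health probes answered with 200; an equivalent check looks like the sketch below (host and port are assumptions, since the log does not record the listening endpoint):

```python
# Sketch: the same kind of health probe the beast frontend is logging.
import http.client

conn = http.client.HTTPConnection('compute-0', 8080, timeout=2)  # port assumed
conn.request('HEAD', '/')
print(conn.getresponse().status)  # expect 200 while radosgw is up
```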
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.025 254065 INFO nova.virt.libvirt.driver [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Deleting instance files /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2_del
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.026 254065 INFO nova.virt.libvirt.driver [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Deletion of /var/lib/nova/instances/464ffed9-a738-406a-9a42-2bd3d60d27f2_del complete
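The "_del" suffix above reflects a two-step cleanup: rename the instance directory first, then remove it, so an interrupted delete leaves an obviously stale directory rather than a half-removed one. A minimal sketch of that pattern (not nova's actual implementation):

```python
# Sketch: rename-then-delete cleanup, as in the two log lines above.
import os
import shutil

base = '/var/lib/nova/instances'
uuid = '464ffed9-a738-406a-9a42-2bd3d60d27f2'
marker = os.path.join(base, uuid + '_del')

os.rename(os.path.join(base, uuid), marker)  # atomic within one filesystem
shutil.rmtree(marker, ignore_errors=True)
```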
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.101 254065 INFO nova.compute.manager [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.102 254065 DEBUG oslo.service.loopingcall [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
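"Waiting for function ... to return" is oslo.service's looping-call machinery; a minimal sketch of the retry shape (intervals illustrative, not nova's actual policy):

```python
# Sketch: run a function under a backoff loop until it signals completion.
from oslo_service import loopingcall

def _deallocate():
    # ... attempt the network deallocation here ...
    # raising LoopingCallDone stops the loop and returns its retvalue
    raise loopingcall.LoopingCallDone(retvalue=True)

call = loopingcall.BackOffLoopingCall(_deallocate)
result = call.start(starting_interval=1, max_interval=30).wait()
```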
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.102 254065 DEBUG nova.compute.manager [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.102 254065 DEBUG nova.network.neutron [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.184 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 20 19:16:12 compute-0 nova_compute[254061]: 2026-01-20 19:16:12.184 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:16:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:12.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:16:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:16:12 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1028: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.074 254065 DEBUG nova.network.neutron [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.096 254065 INFO nova.compute.manager [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Took 0.99 seconds to deallocate network for instance.
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.142 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.142 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
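The acquire/release pairs around "compute_resources" come from oslo.concurrency; the decorator form below takes the same named lock (a sketch; fair=True matches the fair-semaphore behavior of recent nova, but that detail is an assumption here):

```python
# Sketch: serialize resource-tracker updates on a named lock.
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources', 'nova-', fair=True)
def update_usage():
    # tracker state is only mutated while "compute_resources" is held
    ...
```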
Jan 20 19:16:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 19:16:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.186 254065 DEBUG oslo_concurrency.processutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.221 254065 DEBUG nova.network.neutron [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updated VIF entry in instance network info cache for port 49f8aed7-e782-474a-b2f1-2fbc7b04e852. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.222 254065 DEBUG nova.network.neutron [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Updating instance_info_cache with network_info: [{"id": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "address": "fa:16:3e:9f:26:34", "network": {"id": "43656a85-4118-4ce7-9ff6-eff8095a7ad3", "bridge": "br-int", "label": "tempest-network-smoke--1377908728", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap49f8aed7-e7", "ovs_interfaceid": "49f8aed7-e782-474a-b2f1-2fbc7b04e852", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.236 254065 DEBUG nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-unplugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.236 254065 DEBUG oslo_concurrency.lockutils [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.236 254065 DEBUG oslo_concurrency.lockutils [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.237 254065 DEBUG oslo_concurrency.lockutils [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.237 254065 DEBUG nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-unplugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.237 254065 WARNING nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-unplugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state deleted and task_state None.
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.238 254065 DEBUG nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.238 254065 DEBUG oslo_concurrency.lockutils [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.238 254065 DEBUG oslo_concurrency.lockutils [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.238 254065 DEBUG oslo_concurrency.lockutils [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.238 254065 DEBUG nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] No waiting events found dispatching network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.239 254065 WARNING nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received unexpected event network-vif-plugged-49f8aed7-e782-474a-b2f1-2fbc7b04e852 for instance with vm_state deleted and task_state None.
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.239 254065 DEBUG nova.compute.manager [req-22515cb8-a60e-4562-9b52-465a71152472 req-f9d2c660-127e-4353-ad3f-6c5f62277b4a 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Received event network-vif-deleted-49f8aed7-e782-474a-b2f1-2fbc7b04e852 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.240 254065 DEBUG oslo_concurrency.lockutils [req-6697ed8f-41f5-4778-933d-0778c3f657a3 req-fca50c90-35af-4c94-92d8-54a2975a2ab3 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-464ffed9-a738-406a-9a42-2bd3d60d27f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:16:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:16:13 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/297589593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:13 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:13 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:16:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/787847211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.621 254065 DEBUG oslo_concurrency.processutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
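The df run above is a plain subprocess whose JSON cluster totals feed the hypervisor disk stats; a sketch of executing and decoding it the same way (top-level "stats" keys per ceph's JSON output):

```python
# Sketch: fetch cluster capacity the way the logged "ceph df" call does.
import json
from oslo_concurrency import processutils

out, _err = processutils.execute('ceph', 'df', '--format=json',
                                 '--id', 'openstack',
                                 '--conf', '/etc/ceph/ceph.conf')
stats = json.loads(out)['stats']
print(stats['total_bytes'], stats['total_avail_bytes'])
```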
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.625 254065 DEBUG nova.compute.provider_tree [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.648 254065 DEBUG nova.scheduler.client.report [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
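Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio; applied to the numbers above:

```python
# Worked example: capacity implied by the inventory in the log line above.
inventory = {'VCPU': (8, 0, 4.0),
             'MEMORY_MB': (7679, 512, 1.0),
             'DISK_GB': (59, 1, 0.9)}
for rc, (total, reserved, ratio) in inventory.items():
    print(rc, (total - reserved) * ratio)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
```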
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.678 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.707 254065 INFO nova.scheduler.client.report [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Deleted allocations for instance 464ffed9-a738-406a-9a42-2bd3d60d27f2
Jan 20 19:16:13 compute-0 nova_compute[254061]: 2026-01-20 19:16:13.777 254065 DEBUG oslo_concurrency.lockutils [None req-b0886263-582c-410f-8131-f3a8e82841a4 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "464ffed9-a738-406a-9a42-2bd3d60d27f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:14.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 19:16:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1029: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 8.4 KiB/s wr, 31 op/s
Jan 20 19:16:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:16:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:16:14 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:16:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:14.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:16:14 compute-0 sudo[273207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:14 compute-0 sudo[273207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:14 compute-0 sudo[273207]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:14 compute-0 sudo[273232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:16:14 compute-0 sudo[273232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:14 compute-0 ceph-mon[74381]: pgmap v1028: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 20 KiB/s wr, 30 op/s
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1397101129' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/787847211' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3931853994' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2953952221' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:16:14 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 2 failed cephadm daemon(s))
Jan 20 19:16:14 compute-0 ceph-mon[74381]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.703882811 +0000 UTC m=+0.045513861 container create e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:16:14 compute-0 systemd[1]: Started libpod-conmon-e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a.scope.
Jan 20 19:16:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.683736167 +0000 UTC m=+0.025367267 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.793226509 +0000 UTC m=+0.134857579 container init e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_newton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.799559719 +0000 UTC m=+0.141190779 container start e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_newton, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.803242709 +0000 UTC m=+0.144873809 container attach e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_newton, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:16:14 compute-0 tender_newton[273314]: 167 167
Jan 20 19:16:14 compute-0 systemd[1]: libpod-e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a.scope: Deactivated successfully.
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.808528153 +0000 UTC m=+0.150159203 container died e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_newton, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:16:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-614cf7579aa86b36a9795a25af53e0f21403979dfa659846aa3e7d9c3cba210d-merged.mount: Deactivated successfully.
Jan 20 19:16:14 compute-0 podman[273298]: 2026-01-20 19:16:14.848591196 +0000 UTC m=+0.190222246 container remove e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:14 compute-0 systemd[1]: libpod-conmon-e7d21f503b0d85fb46d1429c7c99a2cfc658c01cafeba526c10f81e0de4d258a.scope: Deactivated successfully.
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:15.017070824 +0000 UTC m=+0.060413135 container create a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:16:15 compute-0 systemd[1]: Started libpod-conmon-a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746.scope.
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:14.987672419 +0000 UTC m=+0.031014790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:16:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca1d231af739fd34178028f86f6e28dd858ed3c1003352eb23f373aab8233f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca1d231af739fd34178028f86f6e28dd858ed3c1003352eb23f373aab8233f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca1d231af739fd34178028f86f6e28dd858ed3c1003352eb23f373aab8233f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca1d231af739fd34178028f86f6e28dd858ed3c1003352eb23f373aab8233f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ccca1d231af739fd34178028f86f6e28dd858ed3c1003352eb23f373aab8233f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:15.094716165 +0000 UTC m=+0.138058476 container init a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_elbakyan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:15.102022042 +0000 UTC m=+0.145364323 container start a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_elbakyan, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:15.105668021 +0000 UTC m=+0.149010332 container attach a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:16:15 compute-0 nova_compute[254061]: 2026-01-20 19:16:15.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:15 compute-0 intelligent_elbakyan[273351]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:16:15 compute-0 intelligent_elbakyan[273351]: --> All data devices are unavailable
Jan 20 19:16:15 compute-0 systemd[1]: libpod-a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746.scope: Deactivated successfully.
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:15.419128641 +0000 UTC m=+0.462470922 container died a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_elbakyan, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:16:15 compute-0 ceph-mon[74381]: pgmap v1029: 337 pgs: 337 active+clean; 121 MiB data, 317 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 8.4 KiB/s wr, 31 op/s
Jan 20 19:16:15 compute-0 ceph-mon[74381]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 2 failed cephadm daemon(s))
Jan 20 19:16:15 compute-0 ceph-mon[74381]: Cluster is now healthy
Jan 20 19:16:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ccca1d231af739fd34178028f86f6e28dd858ed3c1003352eb23f373aab8233f-merged.mount: Deactivated successfully.
Jan 20 19:16:15 compute-0 podman[273335]: 2026-01-20 19:16:15.538253894 +0000 UTC m=+0.581596175 container remove a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:16:15 compute-0 systemd[1]: libpod-conmon-a480437d631d607b52e3193f86004f10d23fafbc8575c80fdc3a43abb60dc746.scope: Deactivated successfully.
Jan 20 19:16:15 compute-0 sudo[273232]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:15 compute-0 sudo[273378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:15 compute-0 sudo[273378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:15 compute-0 sudo[273378]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:15 compute-0 sudo[273403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
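The cephadm wrapper above shells out to "ceph-volume lvm list --format json", which reports each OSD's backing LVs; a sketch of invoking it directly and decoding the result (must run as root; fsid copied from the log):

```python
# Sketch: list OSD -> LV/device mappings via cephadm, as the log does.
import json
import subprocess

out = subprocess.run(
    ['cephadm', 'ceph-volume',
     '--fsid', 'aecbbf3b-b405-507b-97d7-637a83f5b4b1',
     '--', 'lvm', 'list', '--format', 'json'],
    check=True, capture_output=True, text=True).stdout
osds = json.loads(out)  # keys are OSD ids, values describe their devices
```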
Jan 20 19:16:15 compute-0 sudo[273403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:16.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 9.7 KiB/s wr, 62 op/s
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.133394224 +0000 UTC m=+0.042017587 container create 76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_archimedes, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:16 compute-0 systemd[1]: Started libpod-conmon-76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436.scope.
Jan 20 19:16:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.115907412 +0000 UTC m=+0.024530785 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.210215743 +0000 UTC m=+0.118839136 container init 76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.216616765 +0000 UTC m=+0.125240118 container start 76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.220535342 +0000 UTC m=+0.129158745 container attach 76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_archimedes, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:16:16 compute-0 stupefied_archimedes[273487]: 167 167
Jan 20 19:16:16 compute-0 systemd[1]: libpod-76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436.scope: Deactivated successfully.
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.223709058 +0000 UTC m=+0.132332451 container died 76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_archimedes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e16dcf9ec1cd1a71896d7f87c18d67254a1dd1eae05bd4e9985e3f51687fbc3-merged.mount: Deactivated successfully.
Jan 20 19:16:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:16.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:16 compute-0 podman[273469]: 2026-01-20 19:16:16.278943432 +0000 UTC m=+0.187566785 container remove 76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:16:16 compute-0 systemd[1]: libpod-conmon-76fb132450630221a38fa90b07231ba10b48922b319b2e46d741a4b73d7bc436.scope: Deactivated successfully.
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.46814073 +0000 UTC m=+0.047746032 container create a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_engelbart, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 19:16:16 compute-0 nova_compute[254061]: 2026-01-20 19:16:16.513 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:16 compute-0 systemd[1]: Started libpod-conmon-a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b.scope.
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.447364498 +0000 UTC m=+0.026969830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:16:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d5a920e6093723831faacaec4ef03631836f4c75c0ad264c192a3a92e8f571/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d5a920e6093723831faacaec4ef03631836f4c75c0ad264c192a3a92e8f571/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d5a920e6093723831faacaec4ef03631836f4c75c0ad264c192a3a92e8f571/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24d5a920e6093723831faacaec4ef03631836f4c75c0ad264c192a3a92e8f571/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.572117863 +0000 UTC m=+0.151723225 container init a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_engelbart, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.580370646 +0000 UTC m=+0.159975978 container start a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.584268842 +0000 UTC m=+0.163874144 container attach a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]: {
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:     "0": [
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:         {
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "devices": [
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "/dev/loop3"
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             ],
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "lv_name": "ceph_lv0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "lv_size": "21470642176",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "name": "ceph_lv0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "tags": {
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.cluster_name": "ceph",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.crush_device_class": "",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.encrypted": "0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.osd_id": "0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.type": "block",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.vdo": "0",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:                 "ceph.with_tpm": "0"
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             },
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "type": "block",
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:             "vg_name": "ceph_vg0"
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:         }
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]:     ]
Jan 20 19:16:16 compute-0 awesome_engelbart[273529]: }
Jan 20 19:16:16 compute-0 systemd[1]: libpod-a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b.scope: Deactivated successfully.
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.873395234 +0000 UTC m=+0.453000526 container died a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_engelbart, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Jan 20 19:16:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-24d5a920e6093723831faacaec4ef03631836f4c75c0ad264c192a3a92e8f571-merged.mount: Deactivated successfully.
Jan 20 19:16:16 compute-0 nova_compute[254061]: 2026-01-20 19:16:16.909 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:16 compute-0 podman[273513]: 2026-01-20 19:16:16.918927976 +0000 UTC m=+0.498533268 container remove a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_engelbart, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:16:16 compute-0 systemd[1]: libpod-conmon-a4ca4cf319123e5474aa13840d1a9caea867da60df23d2d88f331a3d7b3c867b.scope: Deactivated successfully.
Jan 20 19:16:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:16 compute-0 sudo[273403]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:17 compute-0 sudo[273550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:17 compute-0 sudo[273550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:17 compute-0 sudo[273550]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:17 compute-0 sudo[273575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:16:17 compute-0 sudo[273575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:17 compute-0 nova_compute[254061]: 2026-01-20 19:16:17.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:17.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.422975312 +0000 UTC m=+0.021889184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.524021855 +0000 UTC m=+0.122935737 container create dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_lewin, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:16:17 compute-0 ceph-mon[74381]: pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 9.7 KiB/s wr, 62 op/s
Jan 20 19:16:17 compute-0 systemd[1]: Started libpod-conmon-dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115.scope.
Jan 20 19:16:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.607484844 +0000 UTC m=+0.206398716 container init dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.615690456 +0000 UTC m=+0.214604338 container start dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_lewin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.619140749 +0000 UTC m=+0.218054621 container attach dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_lewin, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:16:17 compute-0 trusting_lewin[273655]: 167 167
Jan 20 19:16:17 compute-0 systemd[1]: libpod-dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115.scope: Deactivated successfully.
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.620705891 +0000 UTC m=+0.219619783 container died dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:16:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1847b15859779622ccd678fa312e9d0a20ff9066ee1838d99a669818c3e97a3-merged.mount: Deactivated successfully.
Jan 20 19:16:17 compute-0 podman[273639]: 2026-01-20 19:16:17.659822749 +0000 UTC m=+0.258736631 container remove dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_lewin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:16:17 compute-0 systemd[1]: libpod-conmon-dd5cbd5f23271507fe797562a18b3ca0c9e3aa62d3e7c9f6ac3c9ff7670f5115.scope: Deactivated successfully.
Jan 20 19:16:17 compute-0 podman[273681]: 2026-01-20 19:16:17.860121038 +0000 UTC m=+0.053986181 container create 77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_lalande, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:16:17 compute-0 systemd[1]: Started libpod-conmon-77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864.scope.
Jan 20 19:16:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c0287b3c0f289e152f86eef1779b6d31448735fd086a0ac3eb21c05deb878a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c0287b3c0f289e152f86eef1779b6d31448735fd086a0ac3eb21c05deb878a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c0287b3c0f289e152f86eef1779b6d31448735fd086a0ac3eb21c05deb878a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14c0287b3c0f289e152f86eef1779b6d31448735fd086a0ac3eb21c05deb878a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:17 compute-0 podman[273681]: 2026-01-20 19:16:17.83873601 +0000 UTC m=+0.032601193 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:16:17 compute-0 podman[273681]: 2026-01-20 19:16:17.938944191 +0000 UTC m=+0.132809414 container init 77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_lalande, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:16:17 compute-0 podman[273681]: 2026-01-20 19:16:17.946895035 +0000 UTC m=+0.140760178 container start 77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_lalande, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:16:17 compute-0 podman[273681]: 2026-01-20 19:16:17.95036286 +0000 UTC m=+0.144228033 container attach 77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_lalande, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:16:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 20 19:16:18 compute-0 nova_compute[254061]: 2026-01-20 19:16:18.144 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:18 compute-0 nova_compute[254061]: 2026-01-20 19:16:18.254 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:18.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=infra.usagestats t=2026-01-20T19:16:18.537656077Z level=info msg="Usage stats are ready to report"
Jan 20 19:16:18 compute-0 lvm[273775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:16:18 compute-0 lvm[273775]: VG ceph_vg0 finished
Jan 20 19:16:18 compute-0 dazzling_lalande[273698]: {}
Jan 20 19:16:18 compute-0 systemd[1]: libpod-77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864.scope: Deactivated successfully.
Jan 20 19:16:18 compute-0 podman[273681]: 2026-01-20 19:16:18.65748585 +0000 UTC m=+0.851350983 container died 77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_lalande, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:16:18 compute-0 systemd[1]: libpod-77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864.scope: Consumed 1.119s CPU time.
Jan 20 19:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-14c0287b3c0f289e152f86eef1779b6d31448735fd086a0ac3eb21c05deb878a-merged.mount: Deactivated successfully.
Jan 20 19:16:18 compute-0 podman[273681]: 2026-01-20 19:16:18.69926861 +0000 UTC m=+0.893133743 container remove 77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:16:18 compute-0 systemd[1]: libpod-conmon-77aafb5e6f76298de140a74feb88bb57705c6e745a550c15ef4de1f3709f6864.scope: Deactivated successfully.
Jan 20 19:16:18 compute-0 sudo[273575]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:16:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:16:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:18 compute-0 sudo[273790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:16:18 compute-0 sudo[273790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:18 compute-0 sudo[273790]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:18.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:16:19 compute-0 sudo[273815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:16:19 compute-0 sudo[273815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:19 compute-0 sudo[273815]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:19 compute-0 ceph-mon[74381]: pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 20 19:16:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:19] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:16:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:19] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Jan 20 19:16:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:20.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 20 19:16:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:20.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:21 compute-0 nova_compute[254061]: 2026-01-20 19:16:21.517 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:21 compute-0 ceph-mon[74381]: pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 20 19:16:21 compute-0 nova_compute[254061]: 2026-01-20 19:16:21.911 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 20 19:16:22 compute-0 nova_compute[254061]: 2026-01-20 19:16:22.123 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:16:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:22.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:22 compute-0 ceph-mon[74381]: pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Jan 20 19:16:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:16:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:24.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:16:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Jan 20 19:16:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:24.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 20 19:16:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:25 compute-0 ceph-mon[74381]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 30 op/s
Jan 20 19:16:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:16:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:16:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:26.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 19:16:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:26.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:26 compute-0 nova_compute[254061]: 2026-01-20 19:16:26.433 254065 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768936571.4326205, 464ffed9-a738-406a-9a42-2bd3d60d27f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:16:26 compute-0 nova_compute[254061]: 2026-01-20 19:16:26.434 254065 INFO nova.compute.manager [-] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] VM Stopped (Lifecycle Event)
Jan 20 19:16:26 compute-0 nova_compute[254061]: 2026-01-20 19:16:26.453 254065 DEBUG nova.compute.manager [None req-2d4bf836-a3c9-44df-bedb-c3f6d1aac0ea - - - - - -] [instance: 464ffed9-a738-406a-9a42-2bd3d60d27f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:16:26 compute-0 nova_compute[254061]: 2026-01-20 19:16:26.520 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:26 compute-0 nova_compute[254061]: 2026-01-20 19:16:26.913 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:27.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:16:27 compute-0 ceph-mon[74381]: pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 19:16:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:28.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:28.906Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:28.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:29 compute-0 ceph-mon[74381]: pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:16:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:16:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:16:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:30.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:16:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:30 compute-0 podman[273851]: 2026-01-20 19:16:30.098692131 +0000 UTC m=+0.070476728 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 19:16:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:30.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:30.293 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:30.293 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:30.293 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:30 compute-0 ceph-mon[74381]: pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:31 compute-0 nova_compute[254061]: 2026-01-20 19:16:31.523 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:31 compute-0 nova_compute[254061]: 2026-01-20 19:16:31.916 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:16:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:32.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:16:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:16:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:32.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:33 compute-0 ceph-mon[74381]: pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:16:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:34.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:34.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:35 compute-0 ceph-mon[74381]: pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:36.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:16:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:36.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.526 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.855 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.855 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.891 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.918 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.977 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.978 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.986 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 20 19:16:36 compute-0 nova_compute[254061]: 2026-01-20 19:16:36.986 254065 INFO nova.compute.claims [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Claim successful on node compute-0.ctlplane.example.com
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.132 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:37.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:16:37 compute-0 ceph-mon[74381]: pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:16:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:16:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596012666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.614 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.620 254065 DEBUG nova.compute.provider_tree [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.638 254065 DEBUG nova.scheduler.client.report [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.667 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.669 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.729 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.730 254065 DEBUG nova.network.neutron [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.748 254065 INFO nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.766 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.897 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.898 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.898 254065 INFO nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Creating image(s)
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.923 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.952 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.980 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:37 compute-0 nova_compute[254061]: 2026-01-20 19:16:37.984 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:38.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.064 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.065 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.066 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.066 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "2e09eef5d7d60aeeb43ad4911302a9acdced7386" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.095 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.100 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 6abaa16d-d5c3-447d-948f-53b77897103a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.239 254065 DEBUG nova.policy [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd34bd159f8884263a7481e3fcff15267', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 20 19:16:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:16:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:38.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:16:38 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/596012666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.457 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/2e09eef5d7d60aeeb43ad4911302a9acdced7386 6abaa16d-d5c3-447d-948f-53b77897103a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.357s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.544 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] resizing rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.668 254065 DEBUG nova.objects.instance [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'migration_context' on Instance uuid 6abaa16d-d5c3-447d-948f-53b77897103a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.696 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.696 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Ensure instance console log exists: /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.697 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.698 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:38 compute-0 nova_compute[254061]: 2026-01-20 19:16:38.699 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:38.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:16:38 compute-0 ceph-mgr[74676]: [dashboard INFO request] [192.168.122.100:42946] [POST] [200] [0.005s] [4.0B] [6fb8fb5b-b5dd-4187-9ee3-e1177d2e7e22] /api/prometheus_receiver
Jan 20 19:16:39 compute-0 nova_compute[254061]: 2026-01-20 19:16:39.024 254065 DEBUG nova.network.neutron [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Successfully created port: 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 20 19:16:39 compute-0 sudo[274078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:16:39 compute-0 sudo[274078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:39 compute-0 sudo[274078]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:39 compute-0 podman[274068]: 2026-01-20 19:16:39.136610246 +0000 UTC m=+0.104770746 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 20 19:16:39 compute-0 ceph-mon[74381]: pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:16:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:16:39 compute-0 nova_compute[254061]: 2026-01-20 19:16:39.905 254065 DEBUG nova.network.neutron [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Successfully updated port: 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 20 19:16:39 compute-0 nova_compute[254061]: 2026-01-20 19:16:39.934 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:16:39 compute-0 nova_compute[254061]: 2026-01-20 19:16:39.935 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquired lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:16:39 compute-0 nova_compute[254061]: 2026-01-20 19:16:39.935 254065 DEBUG nova.network.neutron [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 20 19:16:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:40.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.063 254065 DEBUG nova.compute.manager [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-changed-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.064 254065 DEBUG nova.compute.manager [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Refreshing instance network info cache due to event network-changed-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.064 254065 DEBUG oslo_concurrency.lockutils [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:16:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.136 254065 DEBUG nova.network.neutron [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 20 19:16:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:40.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.827 254065 DEBUG nova.network.neutron [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updating instance_info_cache with network_info: [{"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.895 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Releasing lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.895 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Instance network_info: |[{"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.896 254065 DEBUG oslo_concurrency.lockutils [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.896 254065 DEBUG nova.network.neutron [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Refreshing network info cache for port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.900 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Start _get_guest_xml network_info=[{"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'boot_index': 0, 'encryption_secret_uuid': None, 'encryption_format': None, 'size': 0, 'encrypted': False, 'guest_format': None, 'encryption_options': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'image_id': 'bc57af0c-4b71-499e-9808-3c8fc070a488'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.906 254065 WARNING nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.915 254065 DEBUG nova.virt.libvirt.host [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.916 254065 DEBUG nova.virt.libvirt.host [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.919 254065 DEBUG nova.virt.libvirt.host [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.920 254065 DEBUG nova.virt.libvirt.host [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.920 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.920 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-20T19:05:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='7446c314-5a17-42fd-97d9-a7a94e27bff9',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-20T19:05:59Z,direct_url=<?>,disk_format='qcow2',id=bc57af0c-4b71-499e-9808-3c8fc070a488,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='811a4eb676464ca2bd20c0cc2d2f61c9',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-20T19:06:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.921 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.921 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.921 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.922 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.922 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.922 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.923 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.923 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.923 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.923 254065 DEBUG nova.virt.hardware [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 20 19:16:40 compute-0 nova_compute[254061]: 2026-01-20 19:16:40.926 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:16:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1100757737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.407 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.433 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.436 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.529 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:41 compute-0 ceph-mon[74381]: pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:16:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1100757737' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:16:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:16:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1058488939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.920 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.921 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.922 254065 DEBUG nova.virt.libvirt.vif [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-689113489',display_name='tempest-TestNetworkBasicOps-server-689113489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-689113489',id=13,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIEsAInuTid1fpeD1gsq6bIvon3rcQVKfA2yp3/BazBy/JSbIFbbiiEhZClF4hCXOFgXRxm2U+y5vSyy5Fn94takx1GziyzcHmLPxbW+JplEmUL8mMF0cmTVNqxVVjR6A==',key_name='tempest-TestNetworkBasicOps-1523414470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-aze4m31r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:16:37Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=6abaa16d-d5c3-447d-948f-53b77897103a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.923 254065 DEBUG nova.network.os_vif_util [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.923 254065 DEBUG nova.network.os_vif_util [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.925 254065 DEBUG nova.objects.instance [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6abaa16d-d5c3-447d-948f-53b77897103a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:16:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.951 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] End _get_guest_xml xml=<domain type="kvm">
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <uuid>6abaa16d-d5c3-447d-948f-53b77897103a</uuid>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <name>instance-0000000d</name>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <memory>131072</memory>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <vcpu>1</vcpu>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <metadata>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:name>tempest-TestNetworkBasicOps-server-689113489</nova:name>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:creationTime>2026-01-20 19:16:40</nova:creationTime>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:flavor name="m1.nano">
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:memory>128</nova:memory>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:disk>1</nova:disk>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:swap>0</nova:swap>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:ephemeral>0</nova:ephemeral>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:vcpus>1</nova:vcpus>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </nova:flavor>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:owner>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:user uuid="d34bd159f8884263a7481e3fcff15267">tempest-TestNetworkBasicOps-899583499-project-member</nova:user>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:project uuid="dc8a6ea17f334edbbfaf2a91ec6fd167">tempest-TestNetworkBasicOps-899583499</nova:project>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </nova:owner>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:root type="image" uuid="bc57af0c-4b71-499e-9808-3c8fc070a488"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <nova:ports>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <nova:port uuid="64b2dadc-ee9c-4d67-8a7a-8e7044e8326c">
Jan 20 19:16:41 compute-0 nova_compute[254061]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         </nova:port>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </nova:ports>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </nova:instance>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </metadata>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <sysinfo type="smbios">
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <system>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <entry name="manufacturer">RDO</entry>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <entry name="product">OpenStack Compute</entry>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <entry name="serial">6abaa16d-d5c3-447d-948f-53b77897103a</entry>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <entry name="uuid">6abaa16d-d5c3-447d-948f-53b77897103a</entry>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <entry name="family">Virtual Machine</entry>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </system>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </sysinfo>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <os>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <boot dev="hd"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <smbios mode="sysinfo"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </os>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <features>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <acpi/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <apic/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <vmcoreinfo/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </features>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <clock offset="utc">
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <timer name="pit" tickpolicy="delay"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <timer name="hpet" present="no"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </clock>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <cpu mode="host-model" match="exact">
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <topology sockets="1" cores="1" threads="1"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </cpu>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   <devices>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <disk type="network" device="disk">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/6abaa16d-d5c3-447d-948f-53b77897103a_disk">
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </source>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <target dev="vda" bus="virtio"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <disk type="network" device="cdrom">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <driver type="raw" cache="none"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <source protocol="rbd" name="vms/6abaa16d-d5c3-447d-948f-53b77897103a_disk.config">
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <host name="192.168.122.100" port="6789"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <host name="192.168.122.102" port="6789"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <host name="192.168.122.101" port="6789"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </source>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <auth username="openstack">
Jan 20 19:16:41 compute-0 nova_compute[254061]:         <secret type="ceph" uuid="aecbbf3b-b405-507b-97d7-637a83f5b4b1"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       </auth>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <target dev="sda" bus="sata"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </disk>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <interface type="ethernet">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <mac address="fa:16:3e:e2:08:a5"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <driver name="vhost" rx_queue_size="512"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <mtu size="1442"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <target dev="tap64b2dadc-ee"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </interface>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <serial type="pty">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <log file="/var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/console.log" append="off"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </serial>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <video>
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <model type="virtio"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </video>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <input type="tablet" bus="usb"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <rng model="virtio">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <backend model="random">/dev/urandom</backend>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </rng>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="pci" model="pcie-root-port"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <controller type="usb" index="0"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     <memballoon model="virtio">
Jan 20 19:16:41 compute-0 nova_compute[254061]:       <stats period="10"/>
Jan 20 19:16:41 compute-0 nova_compute[254061]:     </memballoon>
Jan 20 19:16:41 compute-0 nova_compute[254061]:   </devices>
Jan 20 19:16:41 compute-0 nova_compute[254061]: </domain>
Jan 20 19:16:41 compute-0 nova_compute[254061]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.953 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Preparing to wait for external event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.954 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.954 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.955 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.956 254065 DEBUG nova.virt.libvirt.vif [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-20T19:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-689113489',display_name='tempest-TestNetworkBasicOps-server-689113489',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-689113489',id=13,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIEsAInuTid1fpeD1gsq6bIvon3rcQVKfA2yp3/BazBy/JSbIFbbiiEhZClF4hCXOFgXRxm2U+y5vSyy5Fn94takx1GziyzcHmLPxbW+JplEmUL8mMF0cmTVNqxVVjR6A==',key_name='tempest-TestNetworkBasicOps-1523414470',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-aze4m31r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-20T19:16:37Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=6abaa16d-d5c3-447d-948f-53b77897103a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.956 254065 DEBUG nova.network.os_vif_util [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.957 254065 DEBUG nova.network.os_vif_util [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.958 254065 DEBUG os_vif [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.959 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.959 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.960 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.964 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.964 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64b2dadc-ee, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.965 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64b2dadc-ee, col_values=(('external_ids', {'iface-id': '64b2dadc-ee9c-4d67-8a7a-8e7044e8326c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:08:a5', 'vm-uuid': '6abaa16d-d5c3-447d-948f-53b77897103a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.968 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:41 compute-0 NetworkManager[48914]: <info>  [1768936601.9684] manager: (tap64b2dadc-ee): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.970 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.974 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:41 compute-0 nova_compute[254061]: 2026-01-20 19:16:41.976 254065 INFO os_vif [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee')
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.036 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.040 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.041 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] No VIF found with MAC fa:16:3e:e2:08:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.042 254065 INFO nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Using config drive
Jan 20 19:16:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:42.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1043: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.070 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.267 254065 DEBUG nova.network.neutron [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updated VIF entry in instance network info cache for port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.268 254065 DEBUG nova.network.neutron [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updating instance_info_cache with network_info: [{"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:42.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.297 254065 DEBUG oslo_concurrency.lockutils [req-203632db-454a-4964-9fd2-832557977c09 req-7cf25a53-99cf-4e60-98f2-7514a94e1048 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.483 254065 INFO nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Creating config drive at /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/disk.config
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.492 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp285qozpq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.633 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp285qozpq" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.659 254065 DEBUG nova.storage.rbd_utils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] rbd image 6abaa16d-d5c3-447d-948f-53b77897103a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.663 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/disk.config 6abaa16d-d5c3-447d-948f-53b77897103a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:16:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1058488939' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Jan 20 19:16:42 compute-0 ceph-mon[74381]: pgmap v1043: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.825 254065 DEBUG oslo_concurrency.processutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/disk.config 6abaa16d-d5c3-447d-948f-53b77897103a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.825 254065 INFO nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Deleting local config drive /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a/disk.config because it was imported into RBD.
Jan 20 19:16:42 compute-0 kernel: tap64b2dadc-ee: entered promiscuous mode
Jan 20 19:16:42 compute-0 NetworkManager[48914]: <info>  [1768936602.8683] manager: (tap64b2dadc-ee): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.868 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:42 compute-0 ovn_controller[155128]: 2026-01-20T19:16:42Z|00091|binding|INFO|Claiming lport 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c for this chassis.
Jan 20 19:16:42 compute-0 ovn_controller[155128]: 2026-01-20T19:16:42Z|00092|binding|INFO|64b2dadc-ee9c-4d67-8a7a-8e7044e8326c: Claiming fa:16:3e:e2:08:a5 10.100.0.3
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.873 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.875 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.887 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:08:a5 10.100.0.3'], port_security=['fa:16:3e:e2:08:a5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6abaa16d-d5c3-447d-948f-53b77897103a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72d40385-358a-4a8f-a099-4346339210d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '2', 'neutron:security_group_ids': '57b76c66-c7b1-468f-b117-c212bf26e6a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46147e3e-f856-40a5-93eb-6c6a79ad7ebf, chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.889 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c in datapath 72d40385-358a-4a8f-a099-4346339210d1 bound to our chassis
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.890 165659 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 72d40385-358a-4a8f-a099-4346339210d1
Jan 20 19:16:42 compute-0 systemd-machined[220746]: New machine qemu-6-instance-0000000d.
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.901 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[679a73d2-2147-46fd-9186-f6e26030cc06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.902 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap72d40385-31 in ovnmeta-72d40385-358a-4a8f-a099-4346339210d1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.903 259376 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap72d40385-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.904 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[9c00a65d-9050-4bd9-b71c-5f4a8f7a7e1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.904 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[3399e695-a685-452e-80cb-cd0959cd7b41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-0000000d.
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.915 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[9965b024-c042-4ad7-ba27-48b63f72a9eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 systemd-udevd[274261]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:16:42 compute-0 NetworkManager[48914]: <info>  [1768936602.9329] device (tap64b2dadc-ee): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 19:16:42 compute-0 NetworkManager[48914]: <info>  [1768936602.9337] device (tap64b2dadc-ee): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.940 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac27685-9d1b-462d-9678-fa33ac1aab44]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.945 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:42 compute-0 ovn_controller[155128]: 2026-01-20T19:16:42Z|00093|binding|INFO|Setting lport 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c ovn-installed in OVS
Jan 20 19:16:42 compute-0 ovn_controller[155128]: 2026-01-20T19:16:42Z|00094|binding|INFO|Setting lport 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c up in Southbound
Jan 20 19:16:42 compute-0 nova_compute[254061]: 2026-01-20 19:16:42.949 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.966 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb658df-562c-44ba-a617-dd0cc28baf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 NetworkManager[48914]: <info>  [1768936602.9724] manager: (tap72d40385-30): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.970 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[1922f634-eee6-44db-987c-6f4c44c7a920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:42 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:42.997 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[2a559205-5948-4565-90f1-078f5ba6c166]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.000 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[0fafe1c5-eb78-4fc6-b6bf-db510bb80bb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 NetworkManager[48914]: <info>  [1768936603.0216] device (tap72d40385-30): carrier: link connected
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.025 259434 DEBUG oslo.privsep.daemon [-] privsep: reply[61676dfe-0221-427b-a965-a7a190ef58c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.039 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ad41fe-a35e-4350-871b-1abda4cd030d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72d40385-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:20:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479880, 'reachable_time': 33491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274292, 'error': None, 'target': 'ovnmeta-72d40385-358a-4a8f-a099-4346339210d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.049 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[bce65c6e-7009-4969-9eba-7bec83c65d75]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:2045'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479880, 'tstamp': 479880}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274293, 'error': None, 'target': 'ovnmeta-72d40385-358a-4a8f-a099-4346339210d1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.061 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[042e7694-1b36-4b1a-8c1c-4c33adb267d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap72d40385-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:20:45'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479880, 'reachable_time': 33491, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274294, 'error': None, 'target': 'ovnmeta-72d40385-358a-4a8f-a099-4346339210d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.091 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[40e04ae4-6181-4950-8923-b2613b776ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.147 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[0e3a50a2-e4a0-4144-a62a-4ac99fe9685d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.148 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72d40385-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.148 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.148 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72d40385-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.149 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:43 compute-0 NetworkManager[48914]: <info>  [1768936603.1504] manager: (tap72d40385-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 20 19:16:43 compute-0 kernel: tap72d40385-30: entered promiscuous mode
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.153 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.154 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap72d40385-30, col_values=(('external_ids', {'iface-id': '2f47d281-b5da-4c7e-b9dc-e350681417d2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.155 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:43 compute-0 ovn_controller[155128]: 2026-01-20T19:16:43Z|00095|binding|INFO|Releasing lport 2f47d281-b5da-4c7e-b9dc-e350681417d2 from this chassis (sb_readonly=0)
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.155 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.156 165659 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/72d40385-358a-4a8f-a099-4346339210d1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/72d40385-358a-4a8f-a099-4346339210d1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.157 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[15f7be1e-92b3-4d17-a7c9-620b76e1f1eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.157 165659 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: global
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     log         /dev/log local0 debug
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     log-tag     haproxy-metadata-proxy-72d40385-358a-4a8f-a099-4346339210d1
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     user        root
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     group       root
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     maxconn     1024
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     pidfile     /var/lib/neutron/external/pids/72d40385-358a-4a8f-a099-4346339210d1.pid.haproxy
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     daemon
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: defaults
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     log global
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     mode http
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     option httplog
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     option dontlognull
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     option http-server-close
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     option forwardfor
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     retries                 3
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     timeout http-request    30s
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     timeout connect         30s
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     timeout client          32s
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     timeout server          32s
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     timeout http-keep-alive 30s
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: listen listener
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     bind 169.254.169.254:80
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     server metadata /var/lib/neutron/metadata_proxy
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:     http-request add-header X-OVN-Network-ID 72d40385-358a-4a8f-a099-4346339210d1
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 20 19:16:43 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:16:43.158 165659 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-72d40385-358a-4a8f-a099-4346339210d1', 'env', 'PROCESS_TAG=haproxy-72d40385-358a-4a8f-a099-4346339210d1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/72d40385-358a-4a8f-a099-4346339210d1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.171 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:43 compute-0 podman[274326]: 2026-01-20 19:16:43.488949661 +0000 UTC m=+0.042774439 container create 61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:16:43 compute-0 systemd[1]: Started libpod-conmon-61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48.scope.
Jan 20 19:16:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d44883a4460d68fd8f995239faaaa579da69594cb90d543dbf9e691feed70d7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:43 compute-0 podman[274326]: 2026-01-20 19:16:43.46601597 +0000 UTC m=+0.019840768 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:16:43 compute-0 podman[274326]: 2026-01-20 19:16:43.570052305 +0000 UTC m=+0.123877103 container init 61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:43 compute-0 podman[274326]: 2026-01-20 19:16:43.57467909 +0000 UTC m=+0.128503878 container start 61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.577 254065 DEBUG nova.compute.manager [req-c8dc4f44-e12d-4b8b-8581-94093041561a req-0445bd96-13dd-47ff-9177-2de1badc81ae 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.578 254065 DEBUG oslo_concurrency.lockutils [req-c8dc4f44-e12d-4b8b-8581-94093041561a req-0445bd96-13dd-47ff-9177-2de1badc81ae 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.578 254065 DEBUG oslo_concurrency.lockutils [req-c8dc4f44-e12d-4b8b-8581-94093041561a req-0445bd96-13dd-47ff-9177-2de1badc81ae 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.578 254065 DEBUG oslo_concurrency.lockutils [req-c8dc4f44-e12d-4b8b-8581-94093041561a req-0445bd96-13dd-47ff-9177-2de1badc81ae 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.579 254065 DEBUG nova.compute.manager [req-c8dc4f44-e12d-4b8b-8581-94093041561a req-0445bd96-13dd-47ff-9177-2de1badc81ae 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Processing event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 20 19:16:43 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [NOTICE]   (274354) : New worker (274363) forked
Jan 20 19:16:43 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [NOTICE]   (274354) : Loading success.
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.751 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936603.7508886, 6abaa16d-d5c3-447d-948f-53b77897103a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.751 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] VM Started (Lifecycle Event)
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.753 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.756 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.758 254065 INFO nova.virt.libvirt.driver [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Instance spawned successfully.
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.759 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.873 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.883 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.893 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.894 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.896 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.897 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.897 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.898 254065 DEBUG nova.virt.libvirt.driver [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.908 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.909 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936603.7516313, 6abaa16d-d5c3-447d-948f-53b77897103a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.910 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] VM Paused (Lifecycle Event)
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.939 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.944 254065 DEBUG nova.virt.driver [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] Emitting event <LifecycleEvent: 1768936603.756181, 6abaa16d-d5c3-447d-948f-53b77897103a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.944 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] VM Resumed (Lifecycle Event)
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.963 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.967 254065 DEBUG nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.972 254065 INFO nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Took 6.07 seconds to spawn the instance on the hypervisor.
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.973 254065 DEBUG nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:16:43 compute-0 nova_compute[254061]: 2026-01-20 19:16:43.995 254065 INFO nova.compute.manager [None req-fc0b9a36-1250-488e-a161-16ab4deabeb9 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 20 19:16:44 compute-0 nova_compute[254061]: 2026-01-20 19:16:44.031 254065 INFO nova.compute.manager [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Took 7.08 seconds to build instance.
Jan 20 19:16:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:44.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:16:44 compute-0 nova_compute[254061]: 2026-01-20 19:16:44.101 254065 DEBUG oslo_concurrency.lockutils [None req-50a13d70-62be-44b8-87f6-a442793306ac d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:44.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:45 compute-0 ceph-mon[74381]: pgmap v1044: 337 pgs: 337 active+clean; 88 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 20 19:16:45 compute-0 nova_compute[254061]: 2026-01-20 19:16:45.835 254065 DEBUG nova.compute.manager [req-5b5d30cc-8f7d-4be1-9dc9-321bdb36f1fd req-6d68f56b-e3f4-4dfa-902f-b4c1b45e64b5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:45 compute-0 nova_compute[254061]: 2026-01-20 19:16:45.835 254065 DEBUG oslo_concurrency.lockutils [req-5b5d30cc-8f7d-4be1-9dc9-321bdb36f1fd req-6d68f56b-e3f4-4dfa-902f-b4c1b45e64b5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:45 compute-0 nova_compute[254061]: 2026-01-20 19:16:45.835 254065 DEBUG oslo_concurrency.lockutils [req-5b5d30cc-8f7d-4be1-9dc9-321bdb36f1fd req-6d68f56b-e3f4-4dfa-902f-b4c1b45e64b5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:45 compute-0 nova_compute[254061]: 2026-01-20 19:16:45.836 254065 DEBUG oslo_concurrency.lockutils [req-5b5d30cc-8f7d-4be1-9dc9-321bdb36f1fd req-6d68f56b-e3f4-4dfa-902f-b4c1b45e64b5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:45 compute-0 nova_compute[254061]: 2026-01-20 19:16:45.836 254065 DEBUG nova.compute.manager [req-5b5d30cc-8f7d-4be1-9dc9-321bdb36f1fd req-6d68f56b-e3f4-4dfa-902f-b4c1b45e64b5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] No waiting events found dispatching network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:16:45 compute-0 nova_compute[254061]: 2026-01-20 19:16:45.836 254065 WARNING nova.compute.manager [req-5b5d30cc-8f7d-4be1-9dc9-321bdb36f1fd req-6d68f56b-e3f4-4dfa-902f-b4c1b45e64b5 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received unexpected event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c for instance with vm_state active and task_state None.
Jan 20 19:16:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1045: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 19:16:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:16:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:46.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:16:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:46.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:46 compute-0 ceph-mon[74381]: pgmap v1045: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 19:16:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:46 compute-0 nova_compute[254061]: 2026-01-20 19:16:46.954 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:46 compute-0 nova_compute[254061]: 2026-01-20 19:16:46.968 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:47.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:16:47 compute-0 ovn_controller[155128]: 2026-01-20T19:16:47Z|00096|binding|INFO|Releasing lport 2f47d281-b5da-4c7e-b9dc-e350681417d2 from this chassis (sb_readonly=0)
Jan 20 19:16:47 compute-0 NetworkManager[48914]: <info>  [1768936607.7737] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 20 19:16:47 compute-0 NetworkManager[48914]: <info>  [1768936607.7746] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.773 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:47 compute-0 ovn_controller[155128]: 2026-01-20T19:16:47Z|00097|binding|INFO|Releasing lport 2f47d281-b5da-4c7e-b9dc-e350681417d2 from this chassis (sb_readonly=0)
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.823 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.827 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.962 254065 DEBUG nova.compute.manager [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-changed-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.964 254065 DEBUG nova.compute.manager [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Refreshing instance network info cache due to event network-changed-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.965 254065 DEBUG oslo_concurrency.lockutils [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.965 254065 DEBUG oslo_concurrency.lockutils [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:16:47 compute-0 nova_compute[254061]: 2026-01-20 19:16:47.966 254065 DEBUG nova.network.neutron [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Refreshing network info cache for port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:16:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1046: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:16:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:48.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:48.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:16:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943983755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:16:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:16:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943983755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:16:49 compute-0 ceph-mon[74381]: pgmap v1046: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:16:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2943983755' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:16:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2943983755' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:16:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:49] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:16:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:49] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:16:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1047: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:16:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:16:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:50.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:16:50 compute-0 nova_compute[254061]: 2026-01-20 19:16:50.229 254065 DEBUG nova.network.neutron [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updated VIF entry in instance network info cache for port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:16:50 compute-0 nova_compute[254061]: 2026-01-20 19:16:50.230 254065 DEBUG nova.network.neutron [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updating instance_info_cache with network_info: [{"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:16:50 compute-0 nova_compute[254061]: 2026-01-20 19:16:50.257 254065 DEBUG oslo_concurrency.lockutils [req-b9f52203-c6b1-48b1-b1c3-e4acb45a06ae req-dd090db6-b533-4eea-a9c1-0c8787643a6c 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:16:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:50.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:51 compute-0 ceph-mon[74381]: pgmap v1047: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 20 19:16:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:51 compute-0 nova_compute[254061]: 2026-01-20 19:16:51.954 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:51 compute-0 nova_compute[254061]: 2026-01-20 19:16:51.969 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1048: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 19:16:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:52.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:52.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:53 compute-0 ceph-mon[74381]: pgmap v1048: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Jan 20 19:16:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1049: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:16:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:54.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:54.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:16:55
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.nfs', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', '.rgw.root', '.mgr', 'vms', 'cephfs.cephfs.data']
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:55 compute-0 ceph-mon[74381]: pgmap v1049: 337 pgs: 337 active+clean; 88 MiB data, 290 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Jan 20 19:16:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:16:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:16:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1050: 337 pgs: 337 active+clean; 109 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 120 op/s
Jan 20 19:16:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:56.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:56.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:16:56 compute-0 nova_compute[254061]: 2026-01-20 19:16:56.970 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:16:56 compute-0 nova_compute[254061]: 2026-01-20 19:16:56.972 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:16:56 compute-0 nova_compute[254061]: 2026-01-20 19:16:56.972 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:16:56 compute-0 nova_compute[254061]: 2026-01-20 19:16:56.972 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:16:57 compute-0 nova_compute[254061]: 2026-01-20 19:16:57.018 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:16:57 compute-0 nova_compute[254061]: 2026-01-20 19:16:57.019 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:16:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:57.222Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:16:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:57.222Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:16:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:16:57.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:16:57 compute-0 ovn_controller[155128]: 2026-01-20T19:16:57Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:08:a5 10.100.0.3
Jan 20 19:16:57 compute-0 ovn_controller[155128]: 2026-01-20T19:16:57Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:08:a5 10.100.0.3
Jan 20 19:16:57 compute-0 ceph-mon[74381]: pgmap v1050: 337 pgs: 337 active+clean; 109 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 120 op/s
Jan 20 19:16:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1051: 337 pgs: 337 active+clean; 109 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 20 19:16:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:16:58.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:16:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:16:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:16:58.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:16:59 compute-0 sudo[274416]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:16:59 compute-0 sudo[274416]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:59 compute-0 sudo[274416]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:59 compute-0 ceph-mon[74381]: pgmap v1051: 337 pgs: 337 active+clean; 109 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 20 19:16:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:59] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:16:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:16:59] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:17:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1052: 337 pgs: 337 active+clean; 109 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 20 19:17:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:00.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:00.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:01 compute-0 podman[274443]: 2026-01-20 19:17:01.088165055 +0000 UTC m=+0.059727046 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:17:01 compute-0 ceph-mon[74381]: pgmap v1052: 337 pgs: 337 active+clean; 109 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 260 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Jan 20 19:17:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:02 compute-0 nova_compute[254061]: 2026-01-20 19:17:02.019 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:02 compute-0 nova_compute[254061]: 2026-01-20 19:17:02.021 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1053: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:17:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:02.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:02.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:02 compute-0 nova_compute[254061]: 2026-01-20 19:17:02.517 254065 INFO nova.compute.manager [None req-ce3bd5c0-676a-44f5-afee-fc445c4d882c d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Get console output
Jan 20 19:17:02 compute-0 nova_compute[254061]: 2026-01-20 19:17:02.522 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:17:02 compute-0 ceph-mon[74381]: pgmap v1053: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:17:03 compute-0 ovn_controller[155128]: 2026-01-20T19:17:03Z|00098|binding|INFO|Releasing lport 2f47d281-b5da-4c7e-b9dc-e350681417d2 from this chassis (sb_readonly=0)
Jan 20 19:17:03 compute-0 nova_compute[254061]: 2026-01-20 19:17:03.257 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:03 compute-0 ovn_controller[155128]: 2026-01-20T19:17:03Z|00099|binding|INFO|Releasing lport 2f47d281-b5da-4c7e-b9dc-e350681417d2 from this chassis (sb_readonly=0)
Jan 20 19:17:03 compute-0 nova_compute[254061]: 2026-01-20 19:17:03.330 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1054: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:17:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:04.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:04.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:04 compute-0 nova_compute[254061]: 2026-01-20 19:17:04.529 254065 INFO nova.compute.manager [None req-a9309987-e0b1-49c2-9181-8ce6c479330b d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Get console output
Jan 20 19:17:04 compute-0 nova_compute[254061]: 2026-01-20 19:17:04.533 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:17:05 compute-0 ceph-mon[74381]: pgmap v1054: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 20 19:17:05 compute-0 nova_compute[254061]: 2026-01-20 19:17:05.290 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:05 compute-0 NetworkManager[48914]: <info>  [1768936625.2914] manager: (patch-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Jan 20 19:17:05 compute-0 NetworkManager[48914]: <info>  [1768936625.2920] manager: (patch-br-int-to-provnet-d23a33c8-2f92-47f7-9446-58aa0ac25f0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 20 19:17:05 compute-0 nova_compute[254061]: 2026-01-20 19:17:05.329 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:05 compute-0 ovn_controller[155128]: 2026-01-20T19:17:05Z|00100|binding|INFO|Releasing lport 2f47d281-b5da-4c7e-b9dc-e350681417d2 from this chassis (sb_readonly=0)
Jan 20 19:17:05 compute-0 nova_compute[254061]: 2026-01-20 19:17:05.333 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:05 compute-0 nova_compute[254061]: 2026-01-20 19:17:05.637 254065 INFO nova.compute.manager [None req-9b227a8e-ce22-4b76-b6e5-0a53bc276aa1 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Get console output
Jan 20 19:17:05 compute-0 nova_compute[254061]: 2026-01-20 19:17:05.643 260360 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 20 19:17:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1055: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 19:17:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:06.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.264 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.265 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.266 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:17:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:06.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.627 254065 DEBUG nova.compute.manager [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-changed-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.627 254065 DEBUG nova.compute.manager [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Refreshing instance network info cache due to event network-changed-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.627 254065 DEBUG oslo_concurrency.lockutils [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.627 254065 DEBUG oslo_concurrency.lockutils [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquired lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.627 254065 DEBUG nova.network.neutron [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Refreshing network info cache for port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.666 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.667 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.667 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.667 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.667 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.668 254065 INFO nova.compute.manager [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Terminating instance
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.670 254065 DEBUG nova.compute.manager [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 20 19:17:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:06 compute-0 kernel: tap64b2dadc-ee (unregistering): left promiscuous mode
Jan 20 19:17:06 compute-0 NetworkManager[48914]: <info>  [1768936626.9725] device (tap64b2dadc-ee): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.985 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:06 compute-0 ovn_controller[155128]: 2026-01-20T19:17:06Z|00101|binding|INFO|Releasing lport 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c from this chassis (sb_readonly=0)
Jan 20 19:17:06 compute-0 ovn_controller[155128]: 2026-01-20T19:17:06Z|00102|binding|INFO|Setting lport 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c down in Southbound
Jan 20 19:17:06 compute-0 ovn_controller[155128]: 2026-01-20T19:17:06Z|00103|binding|INFO|Removing iface tap64b2dadc-ee ovn-installed in OVS
Jan 20 19:17:06 compute-0 nova_compute[254061]: 2026-01-20 19:17:06.987 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.994 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:08:a5 10.100.0.3'], port_security=['fa:16:3e:e2:08:a5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6abaa16d-d5c3-447d-948f-53b77897103a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72d40385-358a-4a8f-a099-4346339210d1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'dc8a6ea17f334edbbfaf2a91ec6fd167', 'neutron:revision_number': '4', 'neutron:security_group_ids': '57b76c66-c7b1-468f-b117-c212bf26e6a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46147e3e-f856-40a5-93eb-6c6a79ad7ebf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>], logical_port=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f4e780f9880>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.996 165659 INFO neutron.agent.ovn.metadata.agent [-] Port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c in datapath 72d40385-358a-4a8f-a099-4346339210d1 unbound from our chassis
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.996 165659 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72d40385-358a-4a8f-a099-4346339210d1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.997 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[d9b53200-5116-48a4-8329-b02e8062d5d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:06 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:06.998 165659 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-72d40385-358a-4a8f-a099-4346339210d1 namespace which is not needed anymore
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.013 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.022 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.023 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 20 19:17:07 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d0000000d.scope: Consumed 13.592s CPU time.
Jan 20 19:17:07 compute-0 systemd-machined[220746]: Machine qemu-6-instance-0000000d terminated.
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.091 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.095 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.104 254065 INFO nova.virt.libvirt.driver [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Instance destroyed successfully.
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.104 254065 DEBUG nova.objects.instance [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lazy-loading 'resources' on Instance uuid 6abaa16d-d5c3-447d-948f-53b77897103a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.120 254065 DEBUG nova.virt.libvirt.vif [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-20T19:16:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-689113489',display_name='tempest-TestNetworkBasicOps-server-689113489',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-689113489',id=13,image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPIEsAInuTid1fpeD1gsq6bIvon3rcQVKfA2yp3/BazBy/JSbIFbbiiEhZClF4hCXOFgXRxm2U+y5vSyy5Fn94takx1GziyzcHmLPxbW+JplEmUL8mMF0cmTVNqxVVjR6A==',key_name='tempest-TestNetworkBasicOps-1523414470',keypairs=<?>,launch_index=0,launched_at=2026-01-20T19:16:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='dc8a6ea17f334edbbfaf2a91ec6fd167',ramdisk_id='',reservation_id='r-aze4m31r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='bc57af0c-4b71-499e-9808-3c8fc070a488',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-899583499',owner_user_name='tempest-TestNetworkBasicOps-899583499-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-20T19:16:44Z,user_data=None,user_id='d34bd159f8884263a7481e3fcff15267',uuid=6abaa16d-d5c3-447d-948f-53b77897103a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.121 254065 DEBUG nova.network.os_vif_util [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converting VIF {"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.235", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.121 254065 DEBUG nova.network.os_vif_util [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.122 254065 DEBUG os_vif [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.123 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.124 254065 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64b2dadc-ee, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.125 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.127 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.127 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.129 254065 INFO os_vif [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:08:a5,bridge_name='br-int',has_traffic_filtering=True,id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c,network=Network(72d40385-358a-4a8f-a099-4346339210d1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64b2dadc-ee')
Jan 20 19:17:07 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [NOTICE]   (274354) : haproxy version is 2.8.14-c23fe91
Jan 20 19:17:07 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [NOTICE]   (274354) : path to executable is /usr/sbin/haproxy
Jan 20 19:17:07 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [WARNING]  (274354) : Exiting Master process...
Jan 20 19:17:07 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [ALERT]    (274354) : Current worker (274363) exited with code 143 (Terminated)
Jan 20 19:17:07 compute-0 neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1[274342]: [WARNING]  (274354) : All workers exited. Exiting... (0)
Jan 20 19:17:07 compute-0 systemd[1]: libpod-61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48.scope: Deactivated successfully.
Jan 20 19:17:07 compute-0 conmon[274342]: conmon 61c3552ef3a373a0887c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48.scope/container/memory.events
Jan 20 19:17:07 compute-0 podman[274499]: 2026-01-20 19:17:07.165388884 +0000 UTC m=+0.054370822 container died 61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.167 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.167 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.167 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.168 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.168 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:17:07 compute-0 ceph-mon[74381]: pgmap v1055: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 20 19:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48-userdata-shm.mount: Deactivated successfully.
Jan 20 19:17:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d44883a4460d68fd8f995239faaaa579da69594cb90d543dbf9e691feed70d7-merged.mount: Deactivated successfully.
Jan 20 19:17:07 compute-0 podman[274499]: 2026-01-20 19:17:07.211947493 +0000 UTC m=+0.100929441 container cleanup 61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 19:17:07 compute-0 systemd[1]: libpod-conmon-61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48.scope: Deactivated successfully.
Jan 20 19:17:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:17:07.224Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.227 254065 DEBUG nova.compute.manager [req-36d5c762-9b81-4773-a3c2-051864a3a14d req-7fb8d204-4bea-4cea-b949-f5298f6fecb8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-vif-unplugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.227 254065 DEBUG oslo_concurrency.lockutils [req-36d5c762-9b81-4773-a3c2-051864a3a14d req-7fb8d204-4bea-4cea-b949-f5298f6fecb8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.228 254065 DEBUG oslo_concurrency.lockutils [req-36d5c762-9b81-4773-a3c2-051864a3a14d req-7fb8d204-4bea-4cea-b949-f5298f6fecb8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.228 254065 DEBUG oslo_concurrency.lockutils [req-36d5c762-9b81-4773-a3c2-051864a3a14d req-7fb8d204-4bea-4cea-b949-f5298f6fecb8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.228 254065 DEBUG nova.compute.manager [req-36d5c762-9b81-4773-a3c2-051864a3a14d req-7fb8d204-4bea-4cea-b949-f5298f6fecb8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] No waiting events found dispatching network-vif-unplugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.228 254065 DEBUG nova.compute.manager [req-36d5c762-9b81-4773-a3c2-051864a3a14d req-7fb8d204-4bea-4cea-b949-f5298f6fecb8 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-vif-unplugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 20 19:17:07 compute-0 podman[274553]: 2026-01-20 19:17:07.284719472 +0000 UTC m=+0.048300248 container remove 61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.290 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[305f547c-1955-4365-9672-7b766bec3d8a]: (4, ('Tue Jan 20 07:17:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1 (61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48)\n61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48\nTue Jan 20 07:17:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-72d40385-358a-4a8f-a099-4346339210d1 (61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48)\n61c3552ef3a373a0887ca77738523d88b9e3b86b2041f239de154900830abc48\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.292 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ac0cff-fb86-49ef-8c2d-9f0a0669a8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.293 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72d40385-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.294 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 kernel: tap72d40385-30: left promiscuous mode
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.316 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.318 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[b2316ac9-499a-4d98-bcc4-beae69b556cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.337 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[72aa1e81-3e79-482d-8e91-cb61d78a2f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.338 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[f945e4b3-2a3f-4570-990d-474d93aacb0a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.356 259376 DEBUG oslo.privsep.daemon [-] privsep: reply[05228654-2acb-4ac4-af97-5a8b7ff0fba0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479874, 'reachable_time': 34199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274587, 'error': None, 'target': 'ovnmeta-72d40385-358a-4a8f-a099-4346339210d1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 systemd[1]: run-netns-ovnmeta\x2d72d40385\x2d358a\x2d4a8f\x2da099\x2d4346339210d1.mount: Deactivated successfully.
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.359 166372 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-72d40385-358a-4a8f-a099-4346339210d1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 20 19:17:07 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:07.359 166372 DEBUG oslo.privsep.daemon [-] privsep: reply[52c47281-110d-4038-beeb-b720c9006c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:17:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:17:07 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/364986689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.614 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.669 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.669 254065 DEBUG nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.748 254065 INFO nova.virt.libvirt.driver [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Deleting instance files /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a_del
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.748 254065 INFO nova.virt.libvirt.driver [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Deletion of /var/lib/nova/instances/6abaa16d-d5c3-447d-948f-53b77897103a_del complete
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.803 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.804 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4563MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.805 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.805 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.806 254065 INFO nova.compute.manager [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Took 1.14 seconds to destroy the instance on the hypervisor.
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.806 254065 DEBUG oslo.service.loopingcall [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.806 254065 DEBUG nova.compute.manager [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 20 19:17:07 compute-0 nova_compute[254061]: 2026-01-20 19:17:07.807 254065 DEBUG nova.network.neutron [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.028 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Instance 6abaa16d-d5c3-447d-948f-53b77897103a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.028 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.028 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:17:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1056: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 107 KiB/s wr, 19 op/s
Jan 20 19:17:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:17:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:08.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.108 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing inventories for resource provider cb9161e5-191d-495c-920a-01144f42a215 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.178 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating ProviderTree inventory for provider cb9161e5-191d-495c-920a-01144f42a215 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.178 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:17:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/364986689' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.191 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing aggregate associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.227 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing trait associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.286 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:17:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:08.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.389 254065 DEBUG nova.network.neutron [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.410 254065 INFO nova.compute.manager [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Took 0.60 seconds to deallocate network for instance.
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.419 254065 DEBUG nova.network.neutron [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updated VIF entry in instance network info cache for port 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.420 254065 DEBUG nova.network.neutron [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updating instance_info_cache with network_info: [{"id": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "address": "fa:16:3e:e2:08:a5", "network": {"id": "72d40385-358a-4a8f-a099-4346339210d1", "bridge": "br-int", "label": "tempest-network-smoke--417948324", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "dc8a6ea17f334edbbfaf2a91ec6fd167", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64b2dadc-ee", "ovs_interfaceid": "64b2dadc-ee9c-4d67-8a7a-8e7044e8326c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.460 254065 DEBUG oslo_concurrency.lockutils [req-da17545e-5353-4ccd-abbb-cfbae81bff41 req-9deb3a42-273a-4373-bc3a-09940b21e201 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Releasing lock "refresh_cache-6abaa16d-d5c3-447d-948f-53b77897103a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.473 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.738 254065 DEBUG nova.compute.manager [req-34d3a061-bbca-46d1-9568-8ebc13aeb6f8 req-99c9a170-e7f4-40d6-a9f9-a1b0fedf94f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-vif-deleted-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.739 254065 INFO nova.compute.manager [req-34d3a061-bbca-46d1-9568-8ebc13aeb6f8 req-99c9a170-e7f4-40d6-a9f9-a1b0fedf94f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Neutron deleted interface 64b2dadc-ee9c-4d67-8a7a-8e7044e8326c; detaching it from the instance and deleting it from the info cache
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.739 254065 DEBUG nova.network.neutron [req-34d3a061-bbca-46d1-9568-8ebc13aeb6f8 req-99c9a170-e7f4-40d6-a9f9-a1b0fedf94f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 20 19:17:08 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:17:08 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2942618338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.759 254065 DEBUG nova.compute.manager [req-34d3a061-bbca-46d1-9568-8ebc13aeb6f8 req-99c9a170-e7f4-40d6-a9f9-a1b0fedf94f2 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Detach interface failed, port_id=64b2dadc-ee9c-4d67-8a7a-8e7044e8326c, reason: Instance 6abaa16d-d5c3-447d-948f-53b77897103a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.760 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.764 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.778 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.801 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.802 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.802 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.329s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:08 compute-0 nova_compute[254061]: 2026-01-20 19:17:08.838 254065 DEBUG oslo_concurrency.processutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:17:09 compute-0 ceph-mon[74381]: pgmap v1056: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 107 KiB/s wr, 19 op/s
Jan 20 19:17:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2942618338' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:17:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2142720756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.272 254065 DEBUG oslo_concurrency.processutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.277 254065 DEBUG nova.compute.provider_tree [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.296 254065 DEBUG nova.scheduler.client.report [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.319 254065 DEBUG nova.compute.manager [req-5167cacc-a9d4-436a-8d50-fe937a55a2ec req-3b78fe10-bd01-4aee-99bb-816e47faa312 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.320 254065 DEBUG oslo_concurrency.lockutils [req-5167cacc-a9d4-436a-8d50-fe937a55a2ec req-3b78fe10-bd01-4aee-99bb-816e47faa312 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Acquiring lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.321 254065 DEBUG oslo_concurrency.lockutils [req-5167cacc-a9d4-436a-8d50-fe937a55a2ec req-3b78fe10-bd01-4aee-99bb-816e47faa312 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.321 254065 DEBUG oslo_concurrency.lockutils [req-5167cacc-a9d4-436a-8d50-fe937a55a2ec req-3b78fe10-bd01-4aee-99bb-816e47faa312 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.322 254065 DEBUG nova.compute.manager [req-5167cacc-a9d4-436a-8d50-fe937a55a2ec req-3b78fe10-bd01-4aee-99bb-816e47faa312 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] No waiting events found dispatching network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.322 254065 WARNING nova.compute.manager [req-5167cacc-a9d4-436a-8d50-fe937a55a2ec req-3b78fe10-bd01-4aee-99bb-816e47faa312 1c559ce237d745a3be104c26e5409427 618cbf228dfa42d2b56aac664c549207 - - default default] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Received unexpected event network-vif-plugged-64b2dadc-ee9c-4d67-8a7a-8e7044e8326c for instance with vm_state deleted and task_state None.
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.324 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.349 254065 INFO nova.scheduler.client.report [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Deleted allocations for instance 6abaa16d-d5c3-447d-948f-53b77897103a
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.420 254065 DEBUG oslo_concurrency.lockutils [None req-af20c3c6-2e1e-433f-ac54-d88688466280 d34bd159f8884263a7481e3fcff15267 dc8a6ea17f334edbbfaf2a91ec6fd167 - - default default] Lock "6abaa16d-d5c3-447d-948f-53b77897103a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:09 compute-0 nova_compute[254061]: 2026-01-20 19:17:09.803 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:09] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:17:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:09] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Jan 20 19:17:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1057: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 107 KiB/s wr, 19 op/s
Jan 20 19:17:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:10.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:10 compute-0 podman[274638]: 2026-01-20 19:17:10.15269911 +0000 UTC m=+0.125180338 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 20 19:17:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2142720756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:17:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:10.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:11 compute-0 nova_compute[254061]: 2026-01-20 19:17:11.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:11 compute-0 nova_compute[254061]: 2026-01-20 19:17:11.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:11 compute-0 ceph-mon[74381]: pgmap v1057: 337 pgs: 337 active+clean; 121 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 107 KiB/s wr, 19 op/s
Jan 20 19:17:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.024 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 108 KiB/s wr, 48 op/s
Jan 20 19:17:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:12.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.147 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.148 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.148 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.158 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.174 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.174 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.175 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:12 compute-0 nova_compute[254061]: 2026-01-20 19:17:12.175 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:17:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:12.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:13 compute-0 nova_compute[254061]: 2026-01-20 19:17:13.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:13 compute-0 nova_compute[254061]: 2026-01-20 19:17:13.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 19:17:13 compute-0 nova_compute[254061]: 2026-01-20 19:17:13.152 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 19:17:13 compute-0 ceph-mon[74381]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 108 KiB/s wr, 48 op/s
Jan 20 19:17:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1414230395' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 19:17:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:17:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:14.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:17:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/993688253' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:17:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:14.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:17:15 compute-0 nova_compute[254061]: 2026-01-20 19:17:15.152 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:15 compute-0 ceph-mon[74381]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 19:17:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4157528016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1427365168' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:17:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 19:17:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:16.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:16 compute-0 nova_compute[254061]: 2026-01-20 19:17:16.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:16 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:16.268 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:17:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:16.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:17 compute-0 nova_compute[254061]: 2026-01-20 19:17:17.026 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:17 compute-0 nova_compute[254061]: 2026-01-20 19:17:17.041 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:17 compute-0 nova_compute[254061]: 2026-01-20 19:17:17.138 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:17 compute-0 nova_compute[254061]: 2026-01-20 19:17:17.159 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:17:17.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:17:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 20 19:17:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:18.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:17:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:18.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:17:18 compute-0 ceph-mon[74381]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 29 op/s
Jan 20 19:17:19 compute-0 nova_compute[254061]: 2026-01-20 19:17:19.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:19 compute-0 sudo[274677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:19 compute-0 sudo[274677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:19 compute-0 sudo[274677]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:19 compute-0 sudo[274702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:17:19 compute-0 sudo[274702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:19 compute-0 sudo[274723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:17:19 compute-0 sudo[274723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:19 compute-0 sudo[274723]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:19 compute-0 sudo[274702]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:19] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:17:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:19] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Jan 20 19:17:19 compute-0 ceph-mon[74381]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 20 19:17:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 20 19:17:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:20.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:20.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:17:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:17:20 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:20 compute-0 sudo[274786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:20 compute-0 sudo[274786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:20 compute-0 sudo[274786]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:20 compute-0 sudo[274811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:17:20 compute-0 sudo[274811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:20 compute-0 ceph-mon[74381]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:17:20 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.281913222 +0000 UTC m=+0.041130015 container create 392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bouman, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:17:21 compute-0 systemd[1]: Started libpod-conmon-392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014.scope.
Jan 20 19:17:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.35689575 +0000 UTC m=+0.116112573 container init 392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.266221167 +0000 UTC m=+0.025437980 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.363621312 +0000 UTC m=+0.122838125 container start 392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bouman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.367390514 +0000 UTC m=+0.126607327 container attach 392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:17:21 compute-0 eager_bouman[274894]: 167 167
Jan 20 19:17:21 compute-0 systemd[1]: libpod-392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014.scope: Deactivated successfully.
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.373345175 +0000 UTC m=+0.132561978 container died 392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bouman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:17:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcb96f2f61c2c3b97c81c7e754ac48ed3ec7b1eb06e3271b8005526252b876f7-merged.mount: Deactivated successfully.
Jan 20 19:17:21 compute-0 podman[274877]: 2026-01-20 19:17:21.416529183 +0000 UTC m=+0.175746006 container remove 392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_bouman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Jan 20 19:17:21 compute-0 systemd[1]: libpod-conmon-392b94a351055b359eda4dee09c23dd1735faeb0aa106028aaeb037ed7e9b014.scope: Deactivated successfully.
Jan 20 19:17:21 compute-0 podman[274918]: 2026-01-20 19:17:21.56869401 +0000 UTC m=+0.041407961 container create 45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True)
Jan 20 19:17:21 compute-0 systemd[1]: Started libpod-conmon-45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9.scope.
Jan 20 19:17:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b57c2b28ea6c5359f3715b5e01ff44a7dd123ea43f486c6266a3bb66965940/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b57c2b28ea6c5359f3715b5e01ff44a7dd123ea43f486c6266a3bb66965940/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b57c2b28ea6c5359f3715b5e01ff44a7dd123ea43f486c6266a3bb66965940/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b57c2b28ea6c5359f3715b5e01ff44a7dd123ea43f486c6266a3bb66965940/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b57c2b28ea6c5359f3715b5e01ff44a7dd123ea43f486c6266a3bb66965940/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:21 compute-0 podman[274918]: 2026-01-20 19:17:21.551183096 +0000 UTC m=+0.023897067 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:17:21 compute-0 podman[274918]: 2026-01-20 19:17:21.648607602 +0000 UTC m=+0.121321563 container init 45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:21 compute-0 podman[274918]: 2026-01-20 19:17:21.663144235 +0000 UTC m=+0.135858186 container start 45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:17:21 compute-0 podman[274918]: 2026-01-20 19:17:21.666555948 +0000 UTC m=+0.139269899 container attach 45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default)
Jan 20 19:17:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:21 compute-0 eloquent_archimedes[274934]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:17:21 compute-0 eloquent_archimedes[274934]: --> All data devices are unavailable
Jan 20 19:17:22 compute-0 systemd[1]: libpod-45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9.scope: Deactivated successfully.
Jan 20 19:17:22 compute-0 nova_compute[254061]: 2026-01-20 19:17:22.028 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:22 compute-0 podman[274918]: 2026-01-20 19:17:22.031069159 +0000 UTC m=+0.503783210 container died 45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1b57c2b28ea6c5359f3715b5e01ff44a7dd123ea43f486c6266a3bb66965940-merged.mount: Deactivated successfully.
Jan 20 19:17:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 19:17:22 compute-0 podman[274918]: 2026-01-20 19:17:22.091840682 +0000 UTC m=+0.564554633 container remove 45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eloquent_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Jan 20 19:17:22 compute-0 systemd[1]: libpod-conmon-45458d4288672946add9e82f94170302a119b0f585f84bce2b71fe74e417ecc9.scope: Deactivated successfully.
Jan 20 19:17:22 compute-0 nova_compute[254061]: 2026-01-20 19:17:22.103 254065 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1768936627.1018424, 6abaa16d-d5c3-447d-948f-53b77897103a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 20 19:17:22 compute-0 nova_compute[254061]: 2026-01-20 19:17:22.103 254065 INFO nova.compute.manager [-] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] VM Stopped (Lifecycle Event)
Jan 20 19:17:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:22.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:22 compute-0 nova_compute[254061]: 2026-01-20 19:17:22.126 254065 DEBUG nova.compute.manager [None req-7fee8bfe-92a6-4107-ba36-574930ad8118 - - - - - -] [instance: 6abaa16d-d5c3-447d-948f-53b77897103a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 20 19:17:22 compute-0 sudo[274811]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:22 compute-0 nova_compute[254061]: 2026-01-20 19:17:22.160 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:22 compute-0 sudo[274962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:22 compute-0 sudo[274962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:22 compute-0 sudo[274962]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:22 compute-0 sudo[274987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:17:22 compute-0 sudo[274987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:22.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.71310581 +0000 UTC m=+0.044680520 container create 9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Jan 20 19:17:22 compute-0 systemd[1]: Started libpod-conmon-9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee.scope.
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.697055496 +0000 UTC m=+0.028630196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:17:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.816723493 +0000 UTC m=+0.148298273 container init 9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_solomon, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.823722162 +0000 UTC m=+0.155296842 container start 9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.827209857 +0000 UTC m=+0.158784647 container attach 9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 19:17:22 compute-0 stupefied_solomon[275070]: 167 167
Jan 20 19:17:22 compute-0 systemd[1]: libpod-9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee.scope: Deactivated successfully.
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.829443417 +0000 UTC m=+0.161018117 container died 9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:17:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e9680ddf6339331a5ce1417fcafef398c66107b9eebebf38fc9461b465b1083-merged.mount: Deactivated successfully.
Jan 20 19:17:22 compute-0 podman[275054]: 2026-01-20 19:17:22.88240424 +0000 UTC m=+0.213978930 container remove 9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_solomon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:17:22 compute-0 systemd[1]: libpod-conmon-9a78ac63b077b3618b4641ded5ddc82a484a38a9f82cf20b03cf52b76e56e6ee.scope: Deactivated successfully.
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.05019111 +0000 UTC m=+0.039950453 container create 8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_ardinghelli, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:17:23 compute-0 systemd[1]: Started libpod-conmon-8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300.scope.
Jan 20 19:17:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266ba77299c3db14bb69cf3be2e6c8307cc174d4ed188efd6c1a3c6dcd4ff6f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266ba77299c3db14bb69cf3be2e6c8307cc174d4ed188efd6c1a3c6dcd4ff6f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266ba77299c3db14bb69cf3be2e6c8307cc174d4ed188efd6c1a3c6dcd4ff6f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266ba77299c3db14bb69cf3be2e6c8307cc174d4ed188efd6c1a3c6dcd4ff6f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.112081954 +0000 UTC m=+0.101841327 container init 8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_ardinghelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.122830325 +0000 UTC m=+0.112589668 container start 8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.126913855 +0000 UTC m=+0.116673198 container attach 8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.032718976 +0000 UTC m=+0.022478339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:17:23 compute-0 nova_compute[254061]: 2026-01-20 19:17:23.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:17:23 compute-0 nova_compute[254061]: 2026-01-20 19:17:23.131 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 19:17:23 compute-0 ceph-mon[74381]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]: {
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:     "0": [
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:         {
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "devices": [
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "/dev/loop3"
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             ],
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "lv_name": "ceph_lv0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "lv_size": "21470642176",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "name": "ceph_lv0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "tags": {
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.cluster_name": "ceph",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.crush_device_class": "",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.encrypted": "0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.osd_id": "0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.type": "block",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.vdo": "0",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:                 "ceph.with_tpm": "0"
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             },
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "type": "block",
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:             "vg_name": "ceph_vg0"
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:         }
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]:     ]
Jan 20 19:17:23 compute-0 crazy_ardinghelli[275112]: }
Jan 20 19:17:23 compute-0 systemd[1]: libpod-8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300.scope: Deactivated successfully.
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.399193941 +0000 UTC m=+0.388953374 container died 8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_ardinghelli, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:17:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-266ba77299c3db14bb69cf3be2e6c8307cc174d4ed188efd6c1a3c6dcd4ff6f1-merged.mount: Deactivated successfully.
Jan 20 19:17:23 compute-0 podman[275096]: 2026-01-20 19:17:23.453643914 +0000 UTC m=+0.443403277 container remove 8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_ardinghelli, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:17:23 compute-0 systemd[1]: libpod-conmon-8f975e1cd9876e277b3b3a67551b51ca7c20f132d7b7f225995c012c0b22e300.scope: Deactivated successfully.
Jan 20 19:17:23 compute-0 sudo[274987]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:23 compute-0 sudo[275133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:23 compute-0 sudo[275133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:23 compute-0 sudo[275133]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:23 compute-0 sudo[275158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:17:23 compute-0 sudo[275158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.094139131 +0000 UTC m=+0.046319184 container create f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:17:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:24.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:24 compute-0 systemd[1]: Started libpod-conmon-f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b.scope.
Jan 20 19:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.070653767 +0000 UTC m=+0.022833830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.175985506 +0000 UTC m=+0.128165599 container init f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_blackburn, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.183574771 +0000 UTC m=+0.135754794 container start f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_blackburn, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Jan 20 19:17:24 compute-0 zen_blackburn[275242]: 167 167
Jan 20 19:17:24 compute-0 systemd[1]: libpod-f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b.scope: Deactivated successfully.
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.186972773 +0000 UTC m=+0.139152806 container attach f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.187891537 +0000 UTC m=+0.140071570 container died f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_blackburn, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 19:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-70c008ddbbc4a94bb0aa15ac92bf4e121598810a0439a65236156a9b5a4a2517-merged.mount: Deactivated successfully.
Jan 20 19:17:24 compute-0 podman[275224]: 2026-01-20 19:17:24.22677281 +0000 UTC m=+0.178952833 container remove f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:24 compute-0 systemd[1]: libpod-conmon-f9336efc6724900776dbb42af055b9c67fea928272af3afd24d6977a0261b46b.scope: Deactivated successfully.
Jan 20 19:17:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:24.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:24 compute-0 podman[275268]: 2026-01-20 19:17:24.429106064 +0000 UTC m=+0.054444594 container create 2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nightingale, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 19:17:24 compute-0 systemd[1]: Started libpod-conmon-2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737.scope.
Jan 20 19:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b89d491eb22dd0d96ee07f8d4577f8214cf376a434a80e9f14d0bc0c0b939/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b89d491eb22dd0d96ee07f8d4577f8214cf376a434a80e9f14d0bc0c0b939/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b89d491eb22dd0d96ee07f8d4577f8214cf376a434a80e9f14d0bc0c0b939/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8b89d491eb22dd0d96ee07f8d4577f8214cf376a434a80e9f14d0bc0c0b939/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:24 compute-0 podman[275268]: 2026-01-20 19:17:24.496544858 +0000 UTC m=+0.121883388 container init 2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 19:17:24 compute-0 podman[275268]: 2026-01-20 19:17:24.404278992 +0000 UTC m=+0.029617572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:17:24 compute-0 podman[275268]: 2026-01-20 19:17:24.50290722 +0000 UTC m=+0.128245750 container start 2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:17:24 compute-0 podman[275268]: 2026-01-20 19:17:24.505568172 +0000 UTC m=+0.130906742 container attach 2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nightingale, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:17:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:25 compute-0 ceph-mon[74381]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:17:25 compute-0 lvm[275360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:17:25 compute-0 lvm[275360]: VG ceph_vg0 finished
Jan 20 19:17:25 compute-0 hardcore_nightingale[275285]: {}
Jan 20 19:17:25 compute-0 systemd[1]: libpod-2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737.scope: Deactivated successfully.
Jan 20 19:17:25 compute-0 podman[275268]: 2026-01-20 19:17:25.266927679 +0000 UTC m=+0.892266209 container died 2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nightingale, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:17:25 compute-0 systemd[1]: libpod-2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737.scope: Consumed 1.193s CPU time.
Jan 20 19:17:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f8b89d491eb22dd0d96ee07f8d4577f8214cf376a434a80e9f14d0bc0c0b939-merged.mount: Deactivated successfully.
Jan 20 19:17:25 compute-0 podman[275268]: 2026-01-20 19:17:25.30833615 +0000 UTC m=+0.933674690 container remove 2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:17:25 compute-0 systemd[1]: libpod-conmon-2ec6c956e7bd6035132843b3c770247814d5d0c28e171237ffa6500f3a995737.scope: Deactivated successfully.
Jan 20 19:17:25 compute-0 sudo[275158]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:17:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:17:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:25 compute-0 sudo[275376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:17:25 compute-0 sudo[275376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:25 compute-0 sudo[275376]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:26.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:26.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:26 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:26 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:17:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:27 compute-0 nova_compute[254061]: 2026-01-20 19:17:27.029 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:27 compute-0 nova_compute[254061]: 2026-01-20 19:17:27.161 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:17:27.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:17:27 compute-0 ceph-mon[74381]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:28.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:28.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:29 compute-0 ceph-mon[74381]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:29] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:17:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:29] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:17:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:30.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:30.294 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:30.295 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:17:30.295 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:30.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:31 compute-0 ceph-mon[74381]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:32 compute-0 nova_compute[254061]: 2026-01-20 19:17:32.031 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:32 compute-0 podman[275408]: 2026-01-20 19:17:32.103304566 +0000 UTC m=+0.068601367 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 19:17:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:32.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:32 compute-0 nova_compute[254061]: 2026-01-20 19:17:32.201 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:17:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:32.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:17:33 compute-0 ceph-mon[74381]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:34.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:34.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:35 compute-0 ceph-mon[74381]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:36.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:36.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:37 compute-0 nova_compute[254061]: 2026-01-20 19:17:37.033 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:37 compute-0 nova_compute[254061]: 2026-01-20 19:17:37.202 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:17:37.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:17:37 compute-0 ceph-mon[74381]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:37 compute-0 sshd-session[275433]: banner exchange: Connection from 104.218.165.188 port 56566: invalid format
Jan 20 19:17:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:38.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:38.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:38 compute-0 ceph-mon[74381]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:39 compute-0 sudo[275437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:17:39 compute-0 sudo[275437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:39 compute-0 sudo[275437]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:39] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:17:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:39] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:17:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:17:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:40.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:40.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:41 compute-0 ceph-mon[74381]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:41 compute-0 podman[275464]: 2026-01-20 19:17:41.174365248 +0000 UTC m=+0.132860956 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:17:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:42 compute-0 nova_compute[254061]: 2026-01-20 19:17:42.035 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:42.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:42 compute-0 nova_compute[254061]: 2026-01-20 19:17:42.204 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:17:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:42.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:17:43 compute-0 ceph-mon[74381]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:44.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:44.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:45 compute-0 ceph-mon[74381]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:46.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:46.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:47 compute-0 nova_compute[254061]: 2026-01-20 19:17:47.091 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:47 compute-0 nova_compute[254061]: 2026-01-20 19:17:47.206 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:47 compute-0 ceph-mon[74381]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:17:47.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:17:47 compute-0 ovn_controller[155128]: 2026-01-20T19:17:47Z|00104|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 20 19:17:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:48.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:48.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:49 compute-0 ceph-mon[74381]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/799670811' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:17:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/799670811' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:17:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:17:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:17:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:50.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:50.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:51 compute-0 ceph-mon[74381]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:52 compute-0 nova_compute[254061]: 2026-01-20 19:17:52.094 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:52.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:52 compute-0 nova_compute[254061]: 2026-01-20 19:17:52.208 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:52.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:53 compute-0 ceph-mon[74381]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:54.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:54.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:17:55
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'vms', '.nfs', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes']
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:55 compute-0 ceph-mon[74381]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:17:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:17:55 compute-0 sshd-session[275434]: Connection closed by 104.218.165.188 port 56574
Jan 20 19:17:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:17:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:56.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:17:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:56.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:56 compute-0 sshd-session[275505]: Connection closed by 104.218.165.188 port 47962 [preauth]
Jan 20 19:17:56 compute-0 sshd-session[275509]: error: Protocol major versions differ: 2 vs. 1
Jan 20 19:17:56 compute-0 sshd-session[275509]: banner exchange: Connection from 104.218.165.188 port 47968: could not read protocol version
Jan 20 19:17:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:17:57 compute-0 nova_compute[254061]: 2026-01-20 19:17:57.096 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:57 compute-0 nova_compute[254061]: 2026-01-20 19:17:57.210 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:17:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:17:57.229Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:17:57 compute-0 ceph-mon[74381]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:17:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:17:58.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:17:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:17:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:17:58.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:17:59 compute-0 ceph-mon[74381]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:17:59 compute-0 sudo[275512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:17:59 compute-0 sudo[275512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:59 compute-0 sudo[275512]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:17:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:17:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:00.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:00.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:01 compute-0 ceph-mon[74381]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:02 compute-0 nova_compute[254061]: 2026-01-20 19:18:02.100 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:02.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:02 compute-0 nova_compute[254061]: 2026-01-20 19:18:02.212 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:02.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:03 compute-0 podman[275541]: 2026-01-20 19:18:03.119104399 +0000 UTC m=+0.087646803 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 19:18:03 compute-0 ceph-mon[74381]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:04.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:04.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:04 compute-0 ceph-mon[74381]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:18:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:06.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:18:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:06.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:07 compute-0 nova_compute[254061]: 2026-01-20 19:18:07.100 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:07 compute-0 ceph-mon[74381]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:07 compute-0 nova_compute[254061]: 2026-01-20 19:18:07.214 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:07.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:18:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:08.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:08.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.147 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:09 compute-0 ceph-mon[74381]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.268 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.269 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.269 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.269 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.269 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:18:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:18:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2180103408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.696 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:18:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.888 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.890 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4560MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.890 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:18:09 compute-0 nova_compute[254061]: 2026-01-20 19:18:09.891 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:18:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.120 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.121 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.139 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:18:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:10.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2180103408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:18:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 19:18:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:10.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 19:18:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:18:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178541402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.613 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.623 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.784 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.878 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:18:10 compute-0 nova_compute[254061]: 2026-01-20 19:18:10.879 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.988s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:18:11 compute-0 ceph-mon[74381]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1178541402' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:11 compute-0 nova_compute[254061]: 2026-01-20 19:18:11.861 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:12 compute-0 nova_compute[254061]: 2026-01-20 19:18:12.102 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:12 compute-0 nova_compute[254061]: 2026-01-20 19:18:12.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:12 compute-0 nova_compute[254061]: 2026-01-20 19:18:12.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:18:12 compute-0 podman[275612]: 2026-01-20 19:18:12.143497338 +0000 UTC m=+0.107530372 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:18:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:12.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:12 compute-0 nova_compute[254061]: 2026-01-20 19:18:12.216 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:12.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:13 compute-0 nova_compute[254061]: 2026-01-20 19:18:13.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:13 compute-0 ceph-mon[74381]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:14 compute-0 nova_compute[254061]: 2026-01-20 19:18:14.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:14 compute-0 nova_compute[254061]: 2026-01-20 19:18:14.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:18:14 compute-0 nova_compute[254061]: 2026-01-20 19:18:14.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:18:14 compute-0 nova_compute[254061]: 2026-01-20 19:18:14.150 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:18:14 compute-0 nova_compute[254061]: 2026-01-20 19:18:14.150 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:14.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1219679551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:14.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:15 compute-0 nova_compute[254061]: 2026-01-20 19:18:15.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:15 compute-0 ceph-mon[74381]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 19:18:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:16.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 19:18:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/113639036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2873579722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3692937494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:18:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:17 compute-0 nova_compute[254061]: 2026-01-20 19:18:17.105 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:17 compute-0 nova_compute[254061]: 2026-01-20 19:18:17.218 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:17.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:18:17 compute-0 ceph-mon[74381]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:18 compute-0 nova_compute[254061]: 2026-01-20 19:18:18.131 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000051s ======
Jan 20 19:18:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:18.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000051s
Jan 20 19:18:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:18.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:18 compute-0 sshd-session[275647]: Accepted publickey for zuul from 192.168.122.10 port 57520 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 19:18:18 compute-0 systemd-logind[796]: New session 57 of user zuul.
Jan 20 19:18:18 compute-0 systemd[1]: Started Session 57 of User zuul.
Jan 20 19:18:18 compute-0 sshd-session[275647]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:18:18 compute-0 sudo[275651]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 20 19:18:18 compute-0 sudo[275651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:19 compute-0 ceph-mon[74381]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:19 compute-0 sudo[275685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:18:19 compute-0 sudo[275685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:19 compute-0 sudo[275685]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:18:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:18:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:20 compute-0 nova_compute[254061]: 2026-01-20 19:18:20.125 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:20.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25652 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:21 compute-0 ceph-mon[74381]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16653 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:21 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25667 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:22 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16668 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:22 compute-0 nova_compute[254061]: 2026-01-20 19:18:22.108 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:22 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25673 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:22.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:22 compute-0 nova_compute[254061]: 2026-01-20 19:18:22.220 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:22.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:22 compute-0 ceph-mon[74381]: from='client.25652 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:22 compute-0 ceph-mon[74381]: from='client.16653 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:22 compute-0 ceph-mon[74381]: from='client.25600 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1499355319' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 19:18:22 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3995146545' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:23 compute-0 ceph-mon[74381]: from='client.25667 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:23 compute-0 ceph-mon[74381]: from='client.16668 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:23 compute-0 ceph-mon[74381]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:23 compute-0 ceph-mon[74381]: from='client.25673 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3995146545' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1887410304' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:24 compute-0 nova_compute[254061]: 2026-01-20 19:18:24.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:18:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:24.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:24.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:25 compute-0 ceph-mon[74381]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:18:25 compute-0 sudo[276035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:25 compute-0 sudo[276035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:25 compute-0 sudo[276035]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:25 compute-0 sudo[276060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
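
[editor's note] Here the orchestrator's SSH user (ceph-admin) escalates via sudo to run the cephadm binary that the mgr copied under /var/lib/ceph/<fsid>/; "cephadm ls" prints the host's daemon inventory as JSON. A sketch of consuming that output — the binary path is copied verbatim from the log line above, but the JSON field names accessed below are an assumption about its output shape:

    # Invoke the same cephadm binary the mgr uses and read its JSON
    # daemon inventory (requires sudo, as in the log).
    import json
    import subprocess

    CEPHADM = ("/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(["sudo", "/bin/python3", CEPHADM, "ls"],
                         capture_output=True, text=True, check=True).stdout
    for daemon in json.loads(out):      # one entry per deployed daemon
        # "name"/"state" are assumed field names in cephadm's inventory
        print(daemon.get("name"), daemon.get("state", "?"))
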
Jan 20 19:18:25 compute-0 sudo[276060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:18:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:26.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:18:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:26.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:26 compute-0 podman[276159]: 2026-01-20 19:18:26.436772475 +0000 UTC m=+0.075154892 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:26 compute-0 podman[276159]: 2026-01-20 19:18:26.531270591 +0000 UTC m=+0.169653008 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:18:26 compute-0 ceph-mon[74381]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:27 compute-0 podman[276293]: 2026-01-20 19:18:27.097896481 +0000 UTC m=+0.053893529 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:18:27 compute-0 podman[276293]: 2026-01-20 19:18:27.108074231 +0000 UTC m=+0.064071279 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:18:27 compute-0 nova_compute[254061]: 2026-01-20 19:18:27.110 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:27 compute-0 nova_compute[254061]: 2026-01-20 19:18:27.222 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:27.231Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:18:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:27.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
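
[editor's note] Alertmanager cannot POST its notification batch to the Ceph dashboard receiver at /api/prometheus_receiver on compute-1 and compute-2 (context deadline exceeded, then i/o timeout on 192.168.122.102:8443), so the dispatch fails after two attempts. For illustration only, a hypothetical stand-in listener that accepts the same webhook POSTs so the payload can be inspected; it is not the Ceph dashboard, and the payload keys mirror Alertmanager's documented webhook JSON:

    # stand_in_receiver.py - hypothetical listener on the port the log
    # shows (8443, plain HTTP per the logged URL).
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Receiver(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            batch = json.loads(body)
            # Alertmanager sends {"status": ..., "alerts": [...], ...}
            for alert in batch.get("alerts", []):
                print(alert.get("status"),
                      alert.get("labels", {}).get("alertname"))
            self.send_response(200)
            self.end_headers()

    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
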
Jan 20 19:18:27 compute-0 ovs-vsctl[276409]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 20 19:18:27 compute-0 podman[276445]: 2026-01-20 19:18:27.685283942 +0000 UTC m=+0.066858233 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 19:18:27 compute-0 podman[276445]: 2026-01-20 19:18:27.699481469 +0000 UTC m=+0.081055770 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 19:18:27 compute-0 podman[276539]: 2026-01-20 19:18:27.968851299 +0000 UTC m=+0.078529163 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, release=1793, io.k8s.display-name=Keepalived on RHEL 9)
Jan 20 19:18:27 compute-0 podman[276539]: 2026-01-20 19:18:27.981290339 +0000 UTC m=+0.090968183 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, version=2.2.4, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, io.openshift.expose-services=)
Jan 20 19:18:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:28.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:28 compute-0 podman[276607]: 2026-01-20 19:18:28.225681928 +0000 UTC m=+0.058730818 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:18:28 compute-0 podman[276607]: 2026-01-20 19:18:28.260522461 +0000 UTC m=+0.093571321 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:18:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:28 compute-0 podman[276758]: 2026-01-20 19:18:28.501034187 +0000 UTC m=+0.068146268 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 19:18:28 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 20 19:18:28 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 20 19:18:28 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 19:18:28 compute-0 podman[276758]: 2026-01-20 19:18:28.662130938 +0000 UTC m=+0.229242999 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 19:18:29 compute-0 podman[277006]: 2026-01-20 19:18:29.034309973 +0000 UTC m=+0.060200677 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:18:29 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: cache status {prefix=cache status} (starting...)
Jan 20 19:18:29 compute-0 podman[277006]: 2026-01-20 19:18:29.08020187 +0000 UTC m=+0.106092544 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:18:29 compute-0 sudo[276060]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:18:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:18:29 compute-0 lvm[277099]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:18:29 compute-0 lvm[277099]: VG ceph_vg0 finished
Jan 20 19:18:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:29 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: client ls {prefix=client ls} (starting...)
Jan 20 19:18:29 compute-0 ceph-mon[74381]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:29 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:29 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:29 compute-0 sudo[277109]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:29 compute-0 sudo[277109]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:29 compute-0 sudo[277109]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:29 compute-0 sudo[277152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:18:29 compute-0 sudo[277152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:29 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25688 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:29 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:29 compute-0 sudo[277152]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:18:29 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:29 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: damage ls {prefix=damage ls} (starting...)
Jan 20 19:18:29 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1690444460' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump loads {prefix=dump loads} (starting...)
Jan 20 19:18:30 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25624 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 20 19:18:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:30.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
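
[editor's note] The cluster raises the CEPHADM_FAILED_DAEMON health check here. A triage sketch, assuming an admin keyring is available on the host; the JSON field names ("status_desc", "daemon_type", "daemon_id", "hostname") are assumptions about the shape of "ceph orch ps" output, not confirmed by this log:

    # Triage sketch for CEPHADM_FAILED_DAEMON.
    import json
    import subprocess

    def ceph_json(*args):
        out = subprocess.run(["ceph", *args, "--format", "json"],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)

    print(ceph_json("health", "detail"))        # names the failed daemon(s)
    for d in ceph_json("orch", "ps"):
        if d.get("status_desc") != "running":   # assumed field name
            print(d.get("daemon_type"), d.get("daemon_id"),
                  "on", d.get("hostname"))
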
Jan 20 19:18:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:18:30.295 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:18:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:18:30.296 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:18:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:18:30.296 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089857888' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:18:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:30.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3029521871' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25700 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 20 19:18:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:30 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 20 19:18:30 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16701 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:30 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1690444460' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:31 compute-0 sudo[277434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:31 compute-0 sudo[277434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:31 compute-0 sudo[277434]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:31 compute-0 sudo[277459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:18:31 compute-0 sudo[277459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:31 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: ops {prefix=ops} (starting...)
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25648 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 20 19:18:31 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689375801' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16737 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.511754958 +0000 UTC m=+0.037535466 container create 981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25727 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 systemd[1]: Started libpod-conmon-981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0.scope.
Jan 20 19:18:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.494505591 +0000 UTC m=+0.020286119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.593534915 +0000 UTC m=+0.119315473 container init 981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lehmann, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25663 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.602157614 +0000 UTC m=+0.127938142 container start 981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lehmann, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.606416777 +0000 UTC m=+0.132197285 container attach 981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lehmann, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 19:18:31 compute-0 funny_lehmann[277610]: 167 167
Jan 20 19:18:31 compute-0 systemd[1]: libpod-981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0.scope: Deactivated successfully.
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.608747339 +0000 UTC m=+0.134527867 container died 981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 19:18:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-59bca1194c0141d87b28ffab2c7a861904cc41418758c6c35f2457b94e2b89b1-merged.mount: Deactivated successfully.
Jan 20 19:18:31 compute-0 podman[277574]: 2026-01-20 19:18:31.657189293 +0000 UTC m=+0.182969801 container remove 981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_lehmann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:31 compute-0 systemd[1]: libpod-conmon-981a79b79c3c6edacd8805993e5f7a2348b566364dda66f44c0f1c02582828e0.scope: Deactivated successfully.
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16752 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 podman[277658]: 2026-01-20 19:18:31.824211211 +0000 UTC m=+0.044994364 container create eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:31 compute-0 systemd[1]: Started libpod-conmon-eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07.scope.
Jan 20 19:18:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fa2527ab9870f7bffd454c4006454e798639e8bf6fff26d2001a7e16cc8aa5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fa2527ab9870f7bffd454c4006454e798639e8bf6fff26d2001a7e16cc8aa5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fa2527ab9870f7bffd454c4006454e798639e8bf6fff26d2001a7e16cc8aa5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fa2527ab9870f7bffd454c4006454e798639e8bf6fff26d2001a7e16cc8aa5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63fa2527ab9870f7bffd454c4006454e798639e8bf6fff26d2001a7e16cc8aa5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:31 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: session ls {prefix=session ls} (starting...)
Jan 20 19:18:31 compute-0 podman[277658]: 2026-01-20 19:18:31.805885585 +0000 UTC m=+0.026668658 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:18:31 compute-0 podman[277658]: 2026-01-20 19:18:31.905433444 +0000 UTC m=+0.126216527 container init eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:31 compute-0 podman[277658]: 2026-01-20 19:18:31.921269573 +0000 UTC m=+0.142052636 container start eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:18:31 compute-0 podman[277658]: 2026-01-20 19:18:31.924772077 +0000 UTC m=+0.145555140 container attach eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:18:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25742 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.25688 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.16686 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/498350049' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.25624 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/374009010' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2089857888' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3735775170' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3029521871' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.25700 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3199918154' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.16701 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.25648 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2689375801' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3447952620' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.16737 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.25727 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2295871397' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.16743 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.25663 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/68543276' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25681 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16761 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: status {prefix=status} (starting...)
Jan 20 19:18:32 compute-0 nova_compute[254061]: 2026-01-20 19:18:32.149 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25687 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:32.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:32 compute-0 nova_compute[254061]: 2026-01-20 19:18:32.224 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:32 compute-0 mystifying_dubinsky[277679]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:18:32 compute-0 mystifying_dubinsky[277679]: --> All data devices are unavailable
Jan 20 19:18:32 compute-0 systemd[1]: libpod-eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07.scope: Deactivated successfully.
Jan 20 19:18:32 compute-0 podman[277658]: 2026-01-20 19:18:32.265330194 +0000 UTC m=+0.486113257 container died eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_dubinsky, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-63fa2527ab9870f7bffd454c4006454e798639e8bf6fff26d2001a7e16cc8aa5-merged.mount: Deactivated successfully.
Jan 20 19:18:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 20 19:18:32 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280615681' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:18:32 compute-0 podman[277658]: 2026-01-20 19:18:32.309040083 +0000 UTC m=+0.529823146 container remove eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_dubinsky, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:18:32 compute-0 systemd[1]: libpod-conmon-eb8c21828039a9d8cffd1a7a358609f7ae89e0f411bf1f758acf8b8ed3f39f07.scope: Deactivated successfully.
Jan 20 19:18:32 compute-0 sudo[277459]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:32 compute-0 sudo[277770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:32 compute-0 sudo[277770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:32.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:32 compute-0 sudo[277770]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:32 compute-0 sudo[277797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
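
[editor's note] After "lvm batch" rejects the pre-built LV ("All data devices are unavailable" above, typically because the device fails ceph-volume's eligibility checks or already carries an OSD), cephadm immediately re-inventories with the "lvm list --format json" invocation just logged. A sketch mirroring that call — the command is copied from the log line above, while the JSON layout (osd_id mapped to a list of LV records with "lv_path" and a "tags" dict) is an assumption about ceph-volume's output:

    import json
    import subprocess

    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    CEPHADM = ("/var/lib/ceph/" + FSID + "/"
               "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

    out = subprocess.run(
        ["sudo", "/bin/python3", CEPHADM, "ceph-volume",
         "--fsid", FSID, "--", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True).stdout

    for osd_id, lvs in json.loads(out).items():   # assumed: osd_id -> LV records
        for lv in lvs:
            print(osd_id, lv.get("lv_path"),
                  lv.get("tags", {}).get("ceph.osd_fsid"))
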
Jan 20 19:18:32 compute-0 sudo[277797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25766 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:18:32 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2406801097' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25702 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 19:18:32 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2077948565' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25778 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.851001649 +0000 UTC m=+0.035912052 container create 42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 19:18:32 compute-0 systemd[1]: Started libpod-conmon-42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3.scope.
Jan 20 19:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.925413203 +0000 UTC m=+0.110323606 container init 42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.834638726 +0000 UTC m=+0.019549159 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.932103169 +0000 UTC m=+0.117013572 container start 42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:32 compute-0 xenodochial_spence[277941]: 167 167
Jan 20 19:18:32 compute-0 systemd[1]: libpod-42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3.scope: Deactivated successfully.
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.936767234 +0000 UTC m=+0.121677637 container attach 42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.93703388 +0000 UTC m=+0.121944283 container died 42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-de7b8d002ef1d4716791371e7d65ebba19e3002a780b4ade1af4d59af8699ed5-merged.mount: Deactivated successfully.
Jan 20 19:18:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 20 19:18:32 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/129417229' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:18:32 compute-0 podman[277915]: 2026-01-20 19:18:32.973833006 +0000 UTC m=+0.158743409 container remove 42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_spence, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:18:32 compute-0 systemd[1]: libpod-conmon-42be81bfe8236f5f9449e9af622299df6d2eef91ea25d988e1288b575025d4e3.scope: Deactivated successfully.
Jan 20 19:18:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:18:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.16752 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.25742 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.25681 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.16761 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2502548731' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.25687 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4280615681' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/929285382' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.25766 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2406801097' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2853487584' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2077948565' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/604113417' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/129417229' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3707371614' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.15957491 +0000 UTC m=+0.056917820 container create f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_germain, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 19:18:33 compute-0 systemd[1]: Started libpod-conmon-f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee.scope.
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.14110368 +0000 UTC m=+0.038446620 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:18:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9948277c0b83a34eb68823ba5bb804609a37534eba62573290b178d101/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9948277c0b83a34eb68823ba5bb804609a37534eba62573290b178d101/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9948277c0b83a34eb68823ba5bb804609a37534eba62573290b178d101/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c36f5b9948277c0b83a34eb68823ba5bb804609a37534eba62573290b178d101/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.262679252 +0000 UTC m=+0.160022182 container init f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_germain, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:18:33 compute-0 podman[278017]: 2026-01-20 19:18:33.262698183 +0000 UTC m=+0.056740635 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.270086359 +0000 UTC m=+0.167429269 container start f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.273870959 +0000 UTC m=+0.171213869 container attach f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_germain, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 19:18:33 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16815 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T19:18:33.355+0000 7fb4429f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:18:33 compute-0 ceph-mgr[74676]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:18:33 compute-0 distracted_germain[278031]: {
Jan 20 19:18:33 compute-0 distracted_germain[278031]:     "0": [
Jan 20 19:18:33 compute-0 distracted_germain[278031]:         {
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "devices": [
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "/dev/loop3"
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             ],
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "lv_name": "ceph_lv0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "lv_size": "21470642176",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "name": "ceph_lv0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "tags": {
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.cluster_name": "ceph",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.crush_device_class": "",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.encrypted": "0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.osd_id": "0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.type": "block",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.vdo": "0",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:                 "ceph.with_tpm": "0"
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             },
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "type": "block",
Jan 20 19:18:33 compute-0 distracted_germain[278031]:             "vg_name": "ceph_vg0"
Jan 20 19:18:33 compute-0 distracted_germain[278031]:         }
Jan 20 19:18:33 compute-0 distracted_germain[278031]:     ]
Jan 20 19:18:33 compute-0 distracted_germain[278031]: }
Jan 20 19:18:33 compute-0 systemd[1]: libpod-f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee.scope: Deactivated successfully.
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.557034946 +0000 UTC m=+0.454377866 container died f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c36f5b9948277c0b83a34eb68823ba5bb804609a37534eba62573290b178d101-merged.mount: Deactivated successfully.
Jan 20 19:18:33 compute-0 podman[277980]: 2026-01-20 19:18:33.599025669 +0000 UTC m=+0.496368579 container remove f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_germain, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 19:18:33 compute-0 systemd[1]: libpod-conmon-f1643194e679dba7337980ea4f96a09390a0f9364d87b8bb2f5f68ba481c1eee.scope: Deactivated successfully.
Jan 20 19:18:33 compute-0 sudo[277797]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:18:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:33 compute-0 sudo[278110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:33 compute-0 sudo[278110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:33 compute-0 sudo[278110]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:33 compute-0 sudo[278137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:18:33 compute-0 sudo[278137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 20 19:18:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/631557007' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25738 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:33 compute-0 ceph-mgr[74676]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:18:33 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T19:18:33.895+0000 7fb4429f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:18:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.180500063 +0000 UTC m=+0.046090533 container create 5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/30701454' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:34 compute-0 systemd[1]: Started libpod-conmon-5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f.scope.
Jan 20 19:18:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:34.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.157172944 +0000 UTC m=+0.022763434 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.252997835 +0000 UTC m=+0.118588325 container init 5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.26034753 +0000 UTC m=+0.125938010 container start 5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:34 compute-0 reverent_colden[278271]: 167 167
Jan 20 19:18:34 compute-0 systemd[1]: libpod-5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f.scope: Deactivated successfully.
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.272025229 +0000 UTC m=+0.137615749 container attach 5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.272687107 +0000 UTC m=+0.138277597 container died 5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ffbff17652e9403c2e287f1a8edd3804b033566160c3b3b4760af2584f4bbb6-merged.mount: Deactivated successfully.
Jan 20 19:18:34 compute-0 podman[278254]: 2026-01-20 19:18:34.306199776 +0000 UTC m=+0.171790246 container remove 5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_colden, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232166718' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:18:34 compute-0 systemd[1]: libpod-conmon-5e32039252ec0e88c470d7584b368666af1d75fad3213f5bc186a99a1c6c104f.scope: Deactivated successfully.
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.25702 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.25778 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3397283718' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1397667005' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.16815 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3262941817' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/880056048' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2595916231' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4217462611' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1898717301' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/631557007' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4268775306' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:34.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:34 compute-0 podman[278326]: 2026-01-20 19:18:34.480980169 +0000 UTC m=+0.044424489 container create 7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:18:34 compute-0 systemd[1]: Started libpod-conmon-7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2.scope.
Jan 20 19:18:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8240118b418965c82c2d89c62efbcfd155478e1b2bb80e18e60cb398f6ce6cf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8240118b418965c82c2d89c62efbcfd155478e1b2bb80e18e60cb398f6ce6cf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8240118b418965c82c2d89c62efbcfd155478e1b2bb80e18e60cb398f6ce6cf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8240118b418965c82c2d89c62efbcfd155478e1b2bb80e18e60cb398f6ce6cf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:34 compute-0 podman[278326]: 2026-01-20 19:18:34.548840557 +0000 UTC m=+0.112284897 container init 7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:18:34 compute-0 podman[278326]: 2026-01-20 19:18:34.55612593 +0000 UTC m=+0.119570250 container start 7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackburn, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 19:18:34 compute-0 podman[278326]: 2026-01-20 19:18:34.463374962 +0000 UTC m=+0.026819312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:18:34 compute-0 podman[278326]: 2026-01-20 19:18:34.560030614 +0000 UTC m=+0.123474964 container attach 7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059534676' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:18:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 19:18:34 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1946649339' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25835 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mgr[74676]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:18:35 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T19:18:35.080+0000 7fb4429f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:18:35 compute-0 lvm[278494]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:18:35 compute-0 lvm[278494]: VG ceph_vg0 finished
Jan 20 19:18:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 19:18:35 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923728329' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:35 compute-0 hungry_blackburn[278363]: {}
Jan 20 19:18:35 compute-0 systemd[1]: libpod-7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2.scope: Deactivated successfully.
Jan 20 19:18:35 compute-0 systemd[1]: libpod-7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2.scope: Consumed 1.012s CPU time.
Jan 20 19:18:35 compute-0 podman[278326]: 2026-01-20 19:18:35.173100806 +0000 UTC m=+0.736545126 container died 7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8240118b418965c82c2d89c62efbcfd155478e1b2bb80e18e60cb398f6ce6cf8-merged.mount: Deactivated successfully.
Jan 20 19:18:35 compute-0 podman[278326]: 2026-01-20 19:18:35.233668051 +0000 UTC m=+0.797112381 container remove 7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 19:18:35 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/277486606' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:35 compute-0 systemd[1]: libpod-conmon-7c82d2da478310bbc710528ac13ed4336d1cb8ffe136807c5c2399cd1690b3e2.scope: Deactivated successfully.
Jan 20 19:18:35 compute-0 sudo[278137]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fca52000/0x0/0x4ffc00000, data 0x10fd89/0x1c8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941190 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:09.705881+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 4464640 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 155 heartbeat osd_stat(store_statfs(0x4fca4e000/0x0/0x4ffc00000, data 0x111f26/0x1cb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 155 handle_osd_map epochs [155,156], i have 155, src has [1,156]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.533835411s of 10.722471237s, submitted: 55
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 155 handle_osd_map epochs [155,156], i have 156, src has [1,156]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=155) [0] r=0 lpr=155 pi=[109,155)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.089603 2 0.000066
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=155) [0] r=0 lpr=155 pi=[109,155)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.089970 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=155) [0] r=0 lpr=155 pi=[109,155)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.090041 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=155) [0] r=0 lpr=155 pi=[109,155)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000234 1 0.000452
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=154) [0]/[1] r=-1 lpr=154 pi=[86,154)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.035946 1 0.000077
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=154) [0]/[1] r=-1 lpr=154 pi=[86,154)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.092135 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=154) [0]/[1] r=-1 lpr=154 pi=[86,154)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started 2.113854 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=154) [0]/[1] r=-1 lpr=154 pi=[86,154)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] exit Reset 0.000066 1 0.000111
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000261 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1f( empty local-lis/les=0/0 n=0 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.065300 2 0.000044
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 156 handle_osd_map epochs [156,156], i have 156, src has [1,156]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: merge_log_dups log.dups.size()=0 olog.dups.size()=29
Jan 20 19:18:35 compute-0 ceph-osd[82836]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=29
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001395 2 0.000203
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 156 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:10.706194+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 4472832 heap: 89169920 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 156 heartbeat osd_stat(store_statfs(0x4fca4d000/0x0/0x4ffc00000, data 0x113fa7/0x1ce000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 156 handle_osd_map epochs [157,157], i have 156, src has [1,157]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.945269 2 0.000203
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012160 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=154/155 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:18:35 compute-0 ceph-osd[82836]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=49'1085 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.012062 5 0.000723
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=49'1085 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 0'0 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=109/109 les/c/f=110/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 crt=49'1085 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 157 handle_osd_map epochs [157,157], i have 157, src has [1,157]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=154/86 les/c/f=155/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/86 les/c/f=157/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003853 4 0.000174
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/86 les/c/f=157/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/86 les/c/f=157/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1e( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/86 les/c/f=157/87/0 sis=156) [0] r=0 lpr=156 pi=[86,156)/1 crt=49'1085 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 49'438 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007334 4 0.000295
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 49'438 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 49'438 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000166 1 0.000053
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 lc 49'438 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.037611 1 0.000086
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 157 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:11.706401+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5464064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 157 handle_osd_map epochs [157,158], i have 157, src has [1,158]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.973710 1 0.000050
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.018992 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] exit Started 2.031531 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=156) [0]/[1] r=-1 lpr=156 pi=[109,156)/1 luod=0'0 crt=49'1085 mlcod 0'0 active+remapped mbc={}] enter Reset
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 luod=0'0 crt=49'1085 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] exit Reset 0.000207 1 0.000300
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] exit Start 0.000021 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002788 2 0.000125
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=0/0 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 158 handle_osd_map epochs [158,158], i have 158, src has [1,158]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: merge_log_dups log.dups.size()=0 olog.dups.size()=32
Jan 20 19:18:35 compute-0 ceph-osd[82836]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000664 2 0.000079
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000022 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 158 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:12.706551+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5464064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 158 handle_osd_map epochs [158,159], i have 158, src has [1,159]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 158 handle_osd_map epochs [159,159], i have 159, src has [1,159]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.020054 2 0.000221
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.023683 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=156/157 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=156/109 les/c/f=157/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=158/109 les/c/f=159/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002922 4 0.000277
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=158/109 les/c/f=159/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=158/109 les/c/f=159/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 pg_epoch: 159 pg[9.1f( v 49'1085 (0'0,49'1085] local-lis/les=158/159 n=5 ec=62/41 lis/c=158/109 les/c/f=159/110/0 sis=158) [0] r=0 lpr=158 pi=[109,158)/1 crt=49'1085 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x1180af/0x1d5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:13.706695+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5464064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958472 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:14.706901+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5464064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:15.707040+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 5464064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:16.707194+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 5455872 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:17.707385+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 5455872 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:18.707576+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 5447680 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:19.707719+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 5447680 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:20.707848+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 5447680 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:21.707981+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 5439488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:22.708167+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 5439488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:23.708338+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 5431296 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:24.708516+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 5431296 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:25.708651+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 5431296 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:26.708865+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 5414912 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:27.709112+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 5414912 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:28.709281+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 5406720 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:29.709440+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 5406720 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:30.709632+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 5398528 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:31.709799+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 5398528 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:32.710409+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 5398528 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:33.710636+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 5390336 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:34.710907+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 5390336 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:35.711081+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 5382144 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:36.711283+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 5382144 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:37.711495+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 5373952 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:38.711859+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 5373952 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:39.712063+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 5373952 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:40.712228+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 5365760 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:41.712485+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 5365760 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:42.712722+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 5357568 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:43.712969+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 5357568 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:44.713166+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5349376 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:45.713406+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5349376 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:46.713572+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5349376 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:47.713951+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 5349376 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:48.714142+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 5341184 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:49.714289+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 5341184 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:50.714521+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 5332992 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:51.714697+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 5332992 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:52.714898+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 5332992 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:53.715121+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 5324800 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:54.715310+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 5324800 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:55.715513+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 5316608 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:56.715676+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 5316608 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:57.715882+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 5316608 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:58.716022+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 5308416 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:46:59.716145+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 5308416 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:00.716318+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 5300224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:01.716480+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84918272 unmapped: 5300224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:02.716733+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 5292032 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:03.716870+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 5292032 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:04.717015+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 5292032 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:05.717099+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 5283840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:06.717223+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 5275648 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:07.717406+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 5275648 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:08.717585+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 5275648 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:09.717755+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 5275648 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:10.717946+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 5259264 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:11.718091+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 5259264 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:12.718292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 5251072 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf1800 session 0x5649cf924d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf2400 session 0x5649cf64b680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:13.718431+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 5251072 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:14.718573+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 5242880 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:15.718705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 5242880 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:16.718875+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 5242880 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:17.719054+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 5234688 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:18.719206+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 5234688 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:19.719350+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 5234688 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:20.719482+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 5226496 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:21.719677+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca43000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 5226496 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:22.719879+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 5218304 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:23.720020+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 5218304 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 74.102874756s of 74.182830811s, submitted: 36
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957632 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:24.720257+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 5218304 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:25.720441+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 5210112 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:26.720574+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 5210112 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:27.720776+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 5201920 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:28.720919+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 5201920 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957632 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:29.721085+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 5201920 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:30.721214+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 5193728 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:31.721369+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 5193728 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:32.721530+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 5185536 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:33.721676+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 5185536 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 957041 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:34.721831+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 5185536 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:35.722128+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 5177344 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:36.722270+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 5177344 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:37.722483+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 5169152 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:38.722665+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 5169152 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.873767853s of 14.881192207s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:39.722853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 5160960 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:40.723072+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 5160960 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:41.723296+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 5160960 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:42.723493+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 5152768 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:43.723646+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 5152768 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:44.723849+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 5152768 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:45.724004+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 5144576 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:46.724163+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 5144576 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:47.724363+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 5136384 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:48.724580+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 5136384 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:49.724793+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 5128192 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:50.725062+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 5128192 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:51.725182+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 5128192 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:52.725303+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 5120000 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:53.725445+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 5120000 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:54.725567+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 5111808 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:55.725706+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 5111808 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:56.725866+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 5111808 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:57.726019+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 5103616 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:58.726155+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 5103616 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:47:59.726344+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 5095424 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:00.726484+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 5095424 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:01.726654+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 5095424 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:02.726793+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 5087232 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:03.726951+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 5087232 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:04.727086+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 5079040 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:05.727326+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 5079040 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:06.727496+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 5070848 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:07.727650+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 5070848 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:08.727762+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 5070848 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:09.727933+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 5062656 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:10.728089+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85155840 unmapped: 5062656 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:11.728231+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 5054464 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:12.728371+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 5054464 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:13.728503+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 5054464 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:14.728652+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 5046272 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:15.728776+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 5046272 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:16.728930+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 5038080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:17.729163+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 5038080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:18.729337+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 5029888 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:19.729472+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 5029888 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:20.729611+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 5029888 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:21.729752+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 5021696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:22.729868+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 5021696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:23.730001+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 5021696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:24.730178+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 5013504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:25.730301+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 5005312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:26.730635+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 4997120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:27.730836+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 4997120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:28.731049+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 4988928 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:29.731238+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 4988928 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:30.731388+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 4988928 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:31.731504+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 4980736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:32.731646+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 4980736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:33.731788+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 4972544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbef800 session 0x5649cf64b0e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:34.731993+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 4972544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:35.732130+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85245952 unmapped: 4972544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:36.732261+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 4964352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:37.732477+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 4964352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:38.732604+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 4956160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:39.732745+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 4956160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:40.732879+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 4956160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:41.732989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 4947968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:42.733124+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 4947968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:43.733247+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 4939776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:44.733376+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956909 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 4939776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbec400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 66.174011230s of 66.177337646s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:45.733470+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 4939776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:46.733630+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 4931584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:47.733861+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 4931584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe9800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:48.733989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 4923392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:49.734119+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958553 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 4923392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:50.734276+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 4915200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:51.734422+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 4915200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:52.734614+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 4915200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:53.734727+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 4907008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:54.734865+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958553 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 4907008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:55.734987+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 4907008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:56.735112+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 4898816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:57.735369+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 4898816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:58.735553+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.959854126s of 13.965412140s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 4882432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:48:59.735777+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 4882432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:00.735916+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 4882432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:01.736095+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 4874240 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:02.736231+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 4874240 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:03.736380+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 4866048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:04.736553+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 4866048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:05.736689+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 4857856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:06.736823+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 4857856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:07.736972+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 4857856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:08.737146+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 4849664 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:09.737283+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 4849664 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:10.737437+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 4841472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:11.737573+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 4841472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:12.737739+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 4841472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:13.737884+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85385216 unmapped: 4833280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:14.738016+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85393408 unmapped: 4825088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:15.738145+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 4816896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:16.738292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 4816896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:17.738453+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85401600 unmapped: 4816896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:18.738639+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 4808704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:19.738768+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85409792 unmapped: 4808704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:20.739009+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 4800512 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:21.739231+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85417984 unmapped: 4800512 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:22.739370+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 4792320 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:23.739521+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 4792320 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:24.739665+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85426176 unmapped: 4792320 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:25.739860+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 4784128 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:26.740022+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85434368 unmapped: 4784128 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:27.740223+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 4775936 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:28.740380+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 4775936 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:29.740502+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 4767744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:30.740640+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 4767744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:31.740771+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85450752 unmapped: 4767744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:32.740890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 4759552 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:33.741006+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 4759552 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:34.741114+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85458944 unmapped: 4759552 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:35.741252+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 4751360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:36.741369+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85467136 unmapped: 4751360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:37.741548+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 4743168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:38.741677+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 4743168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:39.741829+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85475328 unmapped: 4743168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:40.741957+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 4734976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:41.742060+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85483520 unmapped: 4734976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:42.742213+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 4726784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:43.742350+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 4726784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:44.742476+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 4718592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:45.742635+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 4718592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:46.742768+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 4718592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:47.743011+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 4710400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:48.743163+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 4710400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:49.743286+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 4710400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:50.743413+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 4702208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:51.743597+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 4702208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:52.743752+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 4694016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:53.743874+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbec400 session 0x5649d01690e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 4694016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:54.744002+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 4685824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:55.744132+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbe9800 session 0x5649cf925c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0df4400 session 0x5649cdd00f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 4685824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:56.744252+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 4685824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:57.744411+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 4677632 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:58.744534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 4677632 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:49:59.744666+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 4669440 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:00.744784+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 4669440 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:01.744885+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 4669440 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:02.745017+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 8388 writes, 34K keys, 8388 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8388 writes, 1629 syncs, 5.15 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8388 writes, 34K keys, 8388 commit groups, 1.0 writes per commit group, ingest: 21.74 MB, 0.04 MB/s
                                           Interval WAL: 8388 writes, 1629 syncs, 5.15 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 4e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 4603904 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:03.745152+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85614592 unmapped: 4603904 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:04.745314+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958421 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa0bc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 66.110221863s of 66.115867615s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85622784 unmapped: 4595712 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:05.745453+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 4587520 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:06.745576+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 4571136 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:07.745729+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 4571136 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:08.745950+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85647360 unmapped: 4571136 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:09.746147+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958685 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 4562944 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:10.746280+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85655552 unmapped: 4562944 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:11.746398+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85663744 unmapped: 4554752 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:12.746566+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbef800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85680128 unmapped: 4538368 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:13.746761+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 4530176 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:14.746964+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960197 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 4530176 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:15.747146+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85688320 unmapped: 4530176 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:16.747313+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4521984 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:17.747587+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85696512 unmapped: 4521984 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:18.747858+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.847237587s of 13.860383987s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4513792 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:19.748298+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959474 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4513792 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:20.748783+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85704704 unmapped: 4513792 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:21.749073+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:22.749253+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85712896 unmapped: 4505600 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:23.749417+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4497408 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:24.749560+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 4497408 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:25.749864+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:26.750126+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85729280 unmapped: 4489216 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:27.750373+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:28.750584+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85737472 unmapped: 4481024 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:29.750795+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4472832 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:30.750980+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4472832 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:31.751140+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85745664 unmapped: 4472832 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:32.751292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4464640 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:33.751522+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4464640 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:34.751698+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85753856 unmapped: 4464640 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:35.751934+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 4456448 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:36.752222+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85762048 unmapped: 4456448 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:37.752467+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4448256 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:38.752618+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85770240 unmapped: 4448256 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:39.752767+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4440064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:40.752863+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4440064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:41.753017+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85778432 unmapped: 4440064 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:42.753159+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85794816 unmapped: 4423680 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:43.753366+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4415488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:44.753587+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4415488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:45.753784+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4415488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:46.754010+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4407296 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:47.754254+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4415488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:48.754412+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85803008 unmapped: 4415488 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:49.754614+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4407296 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:50.754788+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85811200 unmapped: 4407296 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:51.755276+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbef800 session 0x5649d01685a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0df4000 session 0x5649cf64ab40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4390912 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:52.755422+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4390912 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:53.755598+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85827584 unmapped: 4390912 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:54.755774+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 4382720 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:55.756232+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85835776 unmapped: 4382720 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:56.756384+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 4374528 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:57.756694+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 4374528 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:58.756886+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85843968 unmapped: 4374528 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:50:59.757008+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 4366336 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959342 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:00.757373+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85852160 unmapped: 4366336 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:01.757565+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 4358144 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97c800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.151241302s of 43.162471771s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:02.757702+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85884928 unmapped: 4333568 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:03.757842+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4325376 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:04.758062+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85893120 unmapped: 4325376 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959474 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:05.758292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85901312 unmapped: 4317184 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:06.758421+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85909504 unmapped: 4308992 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:07.758628+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85917696 unmapped: 4300800 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:08.758752+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4292608 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:09.758902+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85925888 unmapped: 4292608 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960986 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:10.759040+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 4276224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:11.759164+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 4276224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:12.759288+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85942272 unmapped: 4276224 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:13.759412+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4268032 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.942431450s of 12.079504013s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:14.759580+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4268032 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960395 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:15.759722+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85950464 unmapped: 4268032 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:16.759872+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:17.760058+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:18.760205+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85958656 unmapped: 4259840 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:19.760389+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4251648 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:20.760570+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85966848 unmapped: 4251648 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:21.760732+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:22.760892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:23.761023+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85975040 unmapped: 4243456 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:24.761224+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4235264 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:25.761363+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4235264 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:26.761500+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85983232 unmapped: 4235264 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:27.761675+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4218880 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:28.761857+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 85999616 unmapped: 4218880 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:29.762094+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4210688 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:30.762377+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86007808 unmapped: 4210688 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:31.762573+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 4202496 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:32.762850+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 4202496 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:33.763010+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86016000 unmapped: 4202496 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:34.763187+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86024192 unmapped: 4194304 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:35.763334+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4186112 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:36.763490+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86032384 unmapped: 4186112 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:37.763644+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4177920 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:38.763883+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86040576 unmapped: 4177920 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:39.764033+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4169728 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:40.764175+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86048768 unmapped: 4169728 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:41.764311+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 4161536 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:42.764464+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86056960 unmapped: 4161536 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:43.764604+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 4153344 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:44.764742+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86065152 unmapped: 4153344 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:45.764899+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86073344 unmapped: 4145152 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:46.765058+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 4136960 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:47.765219+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86081536 unmapped: 4136960 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:48.765424+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 4128768 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:49.765565+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 4128768 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:50.765761+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 4120576 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:51.765904+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86097920 unmapped: 4120576 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:52.766049+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 38.072029114s of 38.474628448s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fca44000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [0,1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86130688 unmapped: 4087808 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:53.766180+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86089728 unmapped: 4128768 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:54.766382+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86138880 unmapped: 4079616 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:55.766908+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:56.767329+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:57.767524+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:58.767661+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:51:59.767819+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf8800 session 0x5649ce59ad20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfa0bc00 session 0x5649cfd3b680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:00.768168+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:01.768421+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:02.768615+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:03.768875+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:04.769002+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:05.769205+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:06.769327+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:07.769488+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:08.769730+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:09.769924+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:10.770068+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960263 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.363565445s of 18.056884766s, submitted: 252
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:11.770287+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:12.770445+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:13.770575+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:14.770775+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:15.770875+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960395 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:16.771009+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:17.771169+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:18.771306+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa0b800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:19.771496+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:20.771693+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963419 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:21.771853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86204416 unmapped: 4014080 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:22.771995+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 4005888 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:23.772176+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 4005888 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:24.772389+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86212608 unmapped: 4005888 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.351023674s of 14.362688065s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:25.772594+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963287 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:26.772720+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:27.772846+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:28.772966+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:29.773100+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:30.773269+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:31.773410+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:32.773556+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:33.773671+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:34.773794+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:35.773989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:36.774133+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86220800 unmapped: 3997696 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:37.774302+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:38.774441+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:39.774551+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:40.774666+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:41.774865+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:42.775023+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf6c00 session 0x5649ce8d54a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf97c800 session 0x5649cf802f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:43.775141+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:44.775255+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:45.775382+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:46.775513+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:47.775681+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:48.775852+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:49.775961+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:50.776113+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:51.776231+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:52.776357+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:53.776488+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:54.776621+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:55.776744+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:56.776936+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:57.777104+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:58.777230+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:52:59.777374+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:00.777539+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962696 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc8a800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.600845337s of 35.608341217s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:01.777669+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:02.777928+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:03.778397+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf1400 session 0x5649cfdd12c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d01e9000 session 0x5649cf9745a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:04.778623+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:05.779185+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86228992 unmapped: 3989504 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:06.779368+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf983000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:07.779705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:08.780010+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:09.780308+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:10.780609+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964208 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:11.780886+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:12.781104+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:13.781312+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:14.781516+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86237184 unmapped: 3981312 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:15.781660+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80a400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.244354248s of 14.788368225s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [1,0,0,0,0,0,0,0,1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 3973120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:16.781868+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 3973120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:17.782060+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 3973120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:18.782250+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 3973120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:19.782383+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86245376 unmapped: 3973120 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:20.782533+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 964340 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:21.782723+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf88b000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:22.782914+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:23.783083+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:24.783233+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:25.783437+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966050 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:26.783590+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:27.783763+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:28.783898+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:29.784055+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf88b000 session 0x5649d0f714a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf80a400 session 0x5649ce8f6780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:30.784225+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966050 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:31.784534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:32.784652+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:33.784857+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:34.784984+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:35.785078+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966050 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:36.785212+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:37.785389+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:38.785528+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf983000 session 0x5649d06283c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cdc8a800 session 0x5649cf803c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:39.785658+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:40.785785+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80a400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.812126160s of 25.115001678s, submitted: 6
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966182 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:41.785962+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfa0b800 session 0x5649ce984000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0dc72c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:42.786125+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:43.786244+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:44.786393+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:45.786541+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966182 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:46.786670+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0812800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:47.786843+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:48.786979+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:49.787109+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf1000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:50.787265+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966314 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:51.787395+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:52.787529+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d9800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.112362862s of 12.119877815s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:53.787734+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:54.787881+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:55.788015+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966314 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:56.788215+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0174c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:57.788408+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:58.788542+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:53:59.788698+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:00.788879+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965723 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:01.789033+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:02.789179+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:03.789311+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:04.789595+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.178146362s of 12.192501068s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:05.789853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965459 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:06.790036+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:07.790296+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:08.790416+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:09.790553+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:10.791463+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965459 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:11.791682+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:12.791901+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:13.792049+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:14.792267+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:15.792434+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0812800 session 0x5649ce8d85a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf80a400 session 0x5649d0181c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965459 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:16.792591+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:17.792785+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:18.792992+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:19.793173+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:20.793361+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965459 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:21.793523+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:22.793690+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:23.793854+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:24.793994+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:25.794222+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965459 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:26.794362+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:27.794631+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0174c00 session 0x5649d0e08d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf1000 session 0x5649ce8fcf00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80a400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.733675003s of 22.743671417s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:28.794909+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:29.795052+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:30.795197+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965591 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:31.795350+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:32.795511+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:33.795689+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:34.795971+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:35.796110+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965591 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf9d9800 session 0x5649cf974960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:36.796253+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:37.796442+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:38.796597+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.979516983s of 10.983094215s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:39.796725+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:40.796850+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965591 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:41.796989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:42.797176+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:43.797339+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:44.797469+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:45.797697+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967103 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:46.797849+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:47.798015+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:48.798133+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:49.798247+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:50.798390+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967235 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:51.798576+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:52.798747+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:53.798937+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:54.799071+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbef800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.469666481s of 16.171043396s, submitted: 4
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:55.799195+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968615 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:56.799334+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:57.799533+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:58.799678+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:54:59.799921+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:00.800142+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968024 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:01.800272+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:02.800438+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:03.800564+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:04.800683+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:05.800878+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:06.801007+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:07.801298+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:08.801439+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:09.801878+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:10.802243+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:11.802405+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:12.802988+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:13.803518+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:14.804469+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:15.805600+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:16.805755+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:17.805895+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:18.806046+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:19.806172+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:20.806343+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:21.806499+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:22.806643+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:23.807053+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:24.807163+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:25.807299+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:26.807486+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86319104 unmapped: 3899392 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:27.807796+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:28.808116+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:29.808353+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:30.808500+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbef800 session 0x5649d08e01e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf9000 session 0x5649ce8d54a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:31.808658+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:32.808814+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:33.809014+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:34.809158+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:35.809297+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:36.809423+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:37.809571+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:38.809695+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:39.809848+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:40.809967+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967892 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:41.810105+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df7400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 46.398178101s of 47.175918579s, submitted: 5
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:42.810252+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:43.810332+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:44.810514+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:45.810649+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968024 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:46.810768+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:47.811056+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb40000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:48.811226+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:49.811405+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:50.811522+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968024 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:51.811662+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:52.811838+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:53.811961+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:54.812070+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:55.812219+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.636700630s of 13.639803886s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967433 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:56.812340+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:57.812503+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:58.812628+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:55:59.812758+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:00.812921+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:01.813056+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:02.813183+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:03.813378+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:04.813535+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:05.813662+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:06.813794+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:07.813987+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:08.814115+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:09.814235+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:10.814355+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:11.814523+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86327296 unmapped: 3891200 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:12.814656+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread fragmentation_score=0.000035 took=0.000045s
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:13.814858+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:14.814989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:15.816443+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:16.816630+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:17.817426+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:18.817627+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:19.818242+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:20.818450+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:21.818599+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:22.818726+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:23.818873+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86335488 unmapped: 3883008 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:24.819011+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:25.819562+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:26.819997+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:27.820409+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:28.820557+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:29.820688+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:30.820828+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:31.821054+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:32.821241+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cdab5c00 session 0x5649cf8025a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:33.821380+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:34.821519+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:35.821638+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:36.821795+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:37.821972+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf88b400 session 0x5649ce8d45a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce398000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:38.822126+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:39.822269+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:40.822405+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:41.822508+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967301 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:42.822663+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:43.822875+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d080f800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.469837189s of 48.558231354s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:44.823054+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:45.823195+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:46.823400+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967433 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:47.823585+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfb40000 session 0x5649d0dc7680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0df7400 session 0x5649d0ac05a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:48.823727+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:49.823868+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:50.823972+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:51.824109+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968945 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:52.824219+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:53.824370+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:54.824500+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:55.824632+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:56.824748+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968945 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:57.824936+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86343680 unmapped: 3874816 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:58.825100+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.445297241s of 14.450569153s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce399c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:56:59.825232+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:00.825362+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:01.825528+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 968945 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:02.825678+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:03.825860+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:04.826018+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:05.826155+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:06.826328+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970457 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:07.826534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:08.826656+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:09.826777+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:10.826942+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.620107651s of 12.632669449s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:11.827082+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969866 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:12.827279+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86351872 unmapped: 3866624 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:13.827424+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:14.827582+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:15.827727+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:16.827858+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969734 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:17.828033+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:18.828180+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:19.828315+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:20.828504+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:21.828664+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969734 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf2000 session 0x5649ce470f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf80a400 session 0x5649ce986000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:22.828801+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:23.828964+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:24.829092+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:25.829244+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:26.829360+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969734 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:27.829545+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:28.829689+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:29.829872+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:30.830003+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:31.830137+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969734 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:32.830264+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbed000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.673086166s of 21.681987762s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:33.830454+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:34.830604+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86360064 unmapped: 3858432 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:35.830775+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df8800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86368256 unmapped: 3850240 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:36.830990+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971378 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86368256 unmapped: 3850240 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:37.831119+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86368256 unmapped: 3850240 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:38.831253+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:39.831477+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:40.831698+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:41.831854+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972890 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:42.832029+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:43.832190+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:44.832344+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:45.832503+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:46.832732+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972299 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:47.832974+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.688027382s of 14.706945419s, submitted: 4
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:48.833121+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:49.833388+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:50.833544+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:51.833685+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972167 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:52.833794+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86376448 unmapped: 3842048 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:53.833962+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf80b000 session 0x5649d0ac0f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d080f800 session 0x5649cf802f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:54.834108+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:55.834276+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:56.834424+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972167 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:57.834560+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:58.834677+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:57:59.834822+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:00.834959+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:01.835067+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972167 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:02.835226+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:03.835372+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.594081879s of 16.597219467s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:04.835442+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:05.835563+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:06.835692+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972299 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:07.835873+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:08.836059+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:09.836300+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:10.836440+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:11.836572+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972299 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:12.836736+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:13.836856+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:14.836983+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:15.837105+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:16.837225+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971117 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86384640 unmapped: 3833856 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:17.837419+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86392832 unmapped: 3825664 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:18.837664+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.919934273s of 14.927965164s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 3817472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:19.837888+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 3817472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:20.838031+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 3817472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:21.838197+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970985 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 3817472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:22.838366+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 3817472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:23.838500+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86401024 unmapped: 3817472 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:24.838641+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:25.838890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:26.839104+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970985 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:27.839292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:28.839412+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:29.839581+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:30.839732+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:31.839871+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970985 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:32.839996+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:33.840196+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:34.840380+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:35.840548+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:36.840727+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970985 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:37.840861+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:38.840968+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:39.841138+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbf8800 session 0x5649d0ac10e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:40.841308+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:41.841514+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970985 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:42.841897+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:43.842104+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:44.842309+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:45.842505+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:46.842666+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970985 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:47.842860+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:48.843082+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:49.843300+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:50.843544+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb3e000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.763120651s of 31.766254425s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0df8800 session 0x5649d0db72c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649ce399c00 session 0x5649ce8d85a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:51.843908+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971117 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:52.844090+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:53.844243+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:54.844477+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:55.844672+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:56.844929+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbea000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972629 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:57.845140+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:58.845325+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:59.845561+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:00.845863+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:01.846073+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972629 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.176573753s of 11.186085701s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:02.846249+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:03.846409+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:04.846543+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:05.846720+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:06.846910+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974141 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:07.847073+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:08.847192+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:09.847386+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:10.847594+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:11.847735+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974141 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:12.847885+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:13.848075+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.656981468s of 12.160857201s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:14.848208+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:15.848354+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:16.848578+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973550 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:17.848870+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:18.849217+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:19.849427+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:20.849615+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:21.849750+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:22.849878+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:23.850007+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:24.850126+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:25.850287+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:26.850426+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:27.850646+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:28.850846+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:29.850986+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:30.851228+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:31.851378+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:32.851755+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:33.851907+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:34.852060+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:35.852554+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:36.852796+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:37.853285+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:38.853496+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:39.853705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:40.853859+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:41.854106+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:42.854266+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:43.854427+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:44.854600+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:45.854773+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbe8400 session 0x5649d0db6f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:46.854923+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:47.855195+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:48.855460+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:49.855654+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:50.855884+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:51.856079+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:52.856225+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:53.856398+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:54.856565+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:55.856890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:56.857097+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce398400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.696498871s of 42.703052521s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973550 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:57.857315+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:58.857441+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:00.274057+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:01.274198+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 3776512 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:02.274597+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 3776512 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975062 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9010 writes, 35K keys, 9010 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9010 writes, 1929 syncs, 4.67 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 622 writes, 961 keys, 622 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
                                           Interval WAL: 622 writes, 300 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:03.274739+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:04.274896+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:05.275155+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:06.275355+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:07.275513+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974471 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:08.275667+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:09.275834+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:10.276008+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:11.276125+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:12.276250+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.116757393s of 15.127565384s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:13.276442+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:14.276650+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:15.276853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:16.276992+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 3735552 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:17.277114+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 3727360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:18.277594+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 3727360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:19.277717+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 3727360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:20.277853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:21.277989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:22.278117+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:23.278236+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:24.278364+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:25.278494+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:26.278625+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:27.278792+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:28.279074+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:29.279207+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:30.279341+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:31.279553+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:32.279901+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:33.280382+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:34.280509+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:35.280632+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:36.281039+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:37.281334+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:38.281544+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:39.281914+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:40.282614+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:41.282779+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:42.282926+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:43.283077+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:44.283198+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:45.283358+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:46.283513+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:47.283685+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:48.283876+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:49.284026+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:50.284147+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:51.284300+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:52.284429+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:53.284827+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:54.284961+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16890 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:55.285074+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:56.285384+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:57.285516+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:58.285659+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:59.285817+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:00.285927+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:01.286056+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:02.286174+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:03.286303+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:04.286419+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:05.286669+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:06.286783+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:07.286959+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:08.287157+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:09.287301+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:10.287445+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:11.287571+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:12.287655+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:13.287776+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:14.287901+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:15.288065+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:16.288194+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:17.288371+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:18.288560+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:19.288729+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:20.288883+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:21.289026+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:22.289148+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:23.289256+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:24.289433+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:25.289583+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:26.289709+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:27.289904+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:28.290070+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:29.290240+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbea000 session 0x5649cf975c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfb3e000 session 0x5649d0d105a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:30.290404+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:31.290625+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:32.290751+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:33.290928+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:34.291060+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:35.291184+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:36.291357+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:37.291524+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:38.291745+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:39.291877+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:40.292026+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 88.315132141s of 88.409767151s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:41.292178+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:42.292337+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974471 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:43.292527+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:44.292707+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:45.292858+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:46.293013+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:47.293195+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977495 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:48.293387+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:49.293561+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649ce398400 session 0x5649d08e10e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:50.293703+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 3661824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.198902130s of 10.208003044s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:51.293883+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 3661824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:52.294032+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 3661824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976904 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:53.294227+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:54.294387+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:55.294511+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:56.294656+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:57.294826+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976916 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:58.294990+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:59.295167+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:00.295284+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:01.295440+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.799607277s of 10.407449722s, submitted: 108
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:02.295612+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d7400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 3596288 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976904 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:03.295756+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 3596288 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:04.295875+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,1,1,1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,0,0,1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:05.295998+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:06.296224+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:07.296394+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976904 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:08.296550+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:09.296664+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbeb000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:10.296890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:11.297036+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:12.297127+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.383151054s of 10.842937469s, submitted: 168
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:13.297261+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:14.297383+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:15.297503+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:16.297713+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:17.297909+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:18.298096+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:19.298244+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:20.298392+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:21.298534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:22.298664+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:23.298796+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:24.298997+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:25.299177+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:26.299350+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:27.299566+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:28.299734+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:29.299845+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:30.300020+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:31.300171+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:32.300330+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:33.300480+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:34.300629+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:35.300795+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:36.300966+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:37.301164+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:38.301348+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:39.301496+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:40.301643+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:41.301858+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:42.302031+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:43.302172+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:44.302304+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:45.302500+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:46.302657+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:47.302794+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:48.302997+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:49.303153+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:50.303314+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:51.303445+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:52.303577+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:53.303718+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:54.303887+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:55.304024+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:56.304150+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:57.304278+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:58.304422+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:59.304565+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:00.304707+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:01.304854+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:02.304972+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:03.305156+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:04.305292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:05.305422+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:06.305542+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:07.305677+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:08.305879+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:09.306013+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:10.306139+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:11.306276+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:12.306425+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:13.306549+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:14.306683+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:15.306846+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:16.306985+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:17.307140+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:18.307283+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:19.307423+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:20.307597+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:21.307734+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:22.307880+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:23.308023+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:24.308190+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:25.308327+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:26.308482+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:27.308687+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:28.308893+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:29.309058+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:30.309211+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:31.309346+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:32.309466+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:33.309642+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:34.309854+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:35.310031+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:36.310201+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:37.310383+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:38.310560+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:39.310689+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:40.310868+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:41.311000+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:42.311123+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:43.311259+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:44.311424+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:45.321511+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:46.321756+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:47.321948+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:48.322170+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:49.322367+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:50.322516+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:51.322687+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:52.322897+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:53.323048+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:54.323208+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:55.323341+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cdc88000 session 0x5649d0ac1a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cdab5400 session 0x5649d0d110e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:56.323472+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:57.323656+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:58.323836+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:59.323974+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:00.324115+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:01.324239+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:02.324524+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:03.324725+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:04.324902+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:05.325070+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:06.325263+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 114.807556152s of 114.820899963s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:07.325428+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:08.325646+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977234 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:09.325940+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:10.326147+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:11.326295+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:12.326494+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:13.326680+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978746 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:14.326907+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:15.327070+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:16.327217+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:17.327354+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:18.327527+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978155 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:19.327715+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:20.327937+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.707267761s of 13.717912674s, submitted: 3
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:21.328106+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:22.328292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:23.328474+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978023 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:24.328606+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:25.328715+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbe8400 session 0x5649d0c072c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:26.328865+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:27.329063+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:28.329230+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978023 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:29.329388+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:30.329576+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbeb000 session 0x5649d0d2b860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf9d7400 session 0x5649cfc1fa40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:31.329784+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:32.329951+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:33.330098+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978023 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:34.330270+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:35.330431+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:36.330562+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d9800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.972690582s of 15.975857735s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:37.330724+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:38.330931+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978155 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:39.331126+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:40.331250+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:41.331386+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbec800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:42.331524+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:43.331742+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978287 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:44.331932+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:45.332136+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:46.332371+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:47.332559+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbeb400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.718964577s of 10.725953102s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:48.332799+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979799 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:49.333010+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:50.333154+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:51.333337+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:52.333501+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:53.333636+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979667 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:54.333774+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:55.333899+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:56.334106+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:57.334241+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.459848404s of 10.484528542s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:58.334400+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:59.334546+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:00.334684+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:01.334860+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:02.335002+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:03.335118+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:04.335239+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:05.335399+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:06.335512+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:07.335744+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:08.336046+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:09.336203+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:10.336412+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:11.336682+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:12.336976+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:13.337146+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:14.337277+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:15.337533+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:16.337694+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:17.337892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:18.338061+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:19.338184+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:20.338312+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:21.338426+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:22.338544+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbeb400 session 0x5649d08e1a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbec800 session 0x5649d0c07860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:23.338670+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:24.338867+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:25.339031+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf97cc00 session 0x5649d08e03c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbed000 session 0x5649d0e09c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:26.339160+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:27.339304+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:28.339489+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:29.339623+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01e9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:30.339778+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:31.339931+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.143909454s of 34.247035980s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:32.340100+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:33.340233+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019962 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 13082624 heap: 99532800 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:34.340715+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86466560 unmapped: 13066240 heap: 99532800 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 162 ms_handle_reset con 0x5649d01e9000 session 0x5649cf7ab4a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:35.340921+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fc1bb000/0x0/0x4ffc00000, data 0x58e460/0x650000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86466560 unmapped: 13066240 heap: 99532800 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:36.341057+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 21430272 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb48000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:37.341220+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 163 ms_handle_reset con 0x5649cfbf2c00 session 0x5649d0d105a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:38.341421+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080143 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:39.341588+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa04800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:40.341734+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb9b4000/0x0/0x4ffc00000, data 0xd927b3/0xe57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:41.341881+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:42.342002+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0812800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.648534775s of 10.933682442s, submitted: 74
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:43.342164+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085765 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:44.342319+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:45.342478+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b1000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:46.342681+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:47.342895+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:48.343122+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085714 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:49.343292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:50.343481+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:51.343639+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:52.343832+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.901214600s of 10.076541901s, submitted: 5
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:53.344128+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:54.344333+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:55.344512+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:56.344670+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:57.344876+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:58.345070+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:59.345202+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:00.345356+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:01.345472+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:02.345619+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:03.345753+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:04.345896+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:05.346021+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:06.346146+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:07.346257+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:08.346435+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:09.346636+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:10.346862+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:11.347025+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:12.347178+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:13.347322+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:14.347467+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:15.347695+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:16.347853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87588864 unmapped: 20340736 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:17.348027+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87588864 unmapped: 20340736 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:18.348201+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 ms_handle_reset con 0x5649cfbf6c00 session 0x5649cf64a780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 ms_handle_reset con 0x5649cf97cc00 session 0x5649d0d30780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095935 data_alloc: 218103808 data_used: 4800512
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 92241920 unmapped: 15687680 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:19.348378+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 92241920 unmapped: 15687680 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.999790192s of 27.003772736s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:20.348528+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbee800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 15368192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:21.348692+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbee800 session 0x5649ce8d8f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbecc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbecc00 session 0x5649d0f701e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:22.348882+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:23.349124+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbf8800 session 0x5649cfb06780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153554 data_alloc: 218103808 data_used: 4804608
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:24.349246+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:25.349368+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cd9f3c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cd9f3c00 session 0x5649ce8d2f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fb46e000/0x0/0x4ffc00000, data 0x12d5bc3/0x139e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 12943360 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:26.349844+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbf6400 session 0x5649cf64ab40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfb41400 session 0x5649d06270e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 12943360 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:27.349961+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 12943360 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:28.350116+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154165 data_alloc: 218103808 data_used: 4804608
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:29.350309+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbebc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa2cd000/0x0/0x4ffc00000, data 0x12d5be6/0x139f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 8241152 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:30.350488+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 8241152 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:31.350627+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.404651642s of 11.529529572s, submitted: 53
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 7192576 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:32.350911+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 7192576 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:33.351158+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x12d7c48/0x13a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193451 data_alloc: 234881024 data_used: 9924608
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:34.351369+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:35.351782+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x12d7c48/0x13a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:36.352113+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:37.352453+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:38.352901+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193451 data_alloc: 234881024 data_used: 9924608
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:39.353055+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:40.353241+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x12d7c48/0x13a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:41.355406+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 3776512 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024612427s of 10.284733772s, submitted: 118
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:42.355578+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 3719168 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93f6000/0x0/0x4ffc00000, data 0x21abc48/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:43.355890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317907 data_alloc: 234881024 data_used: 11325440
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:44.356091+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:45.356329+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:46.356645+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:47.356787+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f936c000/0x0/0x4ffc00000, data 0x2235c48/0x2300000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:48.357008+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 3874816 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f934b000/0x0/0x4ffc00000, data 0x2256c48/0x2321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa04800 session 0x5649cf8025a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4400 session 0x5649ce8fc960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314731 data_alloc: 234881024 data_used: 11333632
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:49.357146+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 3874816 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:50.357336+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:51.357474+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:52.357753+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:53.357962+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.102064133s of 12.177471161s, submitted: 37
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314835 data_alloc: 234881024 data_used: 11333632
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:54.358124+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f934b000/0x0/0x4ffc00000, data 0x2256c48/0x2321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:55.358292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9342000/0x0/0x4ffc00000, data 0x225fc48/0x232a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:56.358463+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9342000/0x0/0x4ffc00000, data 0x225fc48/0x232a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:57.358661+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:58.358849+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:59.359037+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314835 data_alloc: 234881024 data_used: 11333632
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df9c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df9c00 session 0x5649ce59ab40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa04800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:00.359292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 3833856 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa04800 session 0x5649d08e0d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:01.359569+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 3833856 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbea000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbea000 session 0x5649cfdb92c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0811000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0811000 session 0x5649ce8f6960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:02.359945+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:03.360173+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:04.360363+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358416 data_alloc: 234881024 data_used: 11333632
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:05.360516+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0175800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.447368622s of 11.733536720s, submitted: 28
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:06.360680+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:07.360890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:08.361116+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 12353536 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:09.361290+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358673 data_alloc: 234881024 data_used: 11366400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 12328960 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:10.361452+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:11.361617+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:12.361789+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:13.361983+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:14.362129+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1397534 data_alloc: 234881024 data_used: 16273408
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:15.362286+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:16.362461+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf9d9800 session 0x5649ce8f74a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:17.362614+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:18.362856+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.861881256s of 12.330757141s, submitted: 8
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:19.362992+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1397510 data_alloc: 234881024 data_used: 16273408
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:20.363108+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 7569408 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:21.363245+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 5373952 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:22.363390+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 5226496 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8545000/0x0/0x4ffc00000, data 0x3053caa/0x311f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:23.363529+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 5193728 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:24.363643+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470454 data_alloc: 234881024 data_used: 16990208
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:25.363748+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:26.363885+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:27.364023+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:28.364320+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8544000/0x0/0x4ffc00000, data 0x305ccaa/0x3128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.644721031s of 10.839123726s, submitted: 68
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4800 session 0x5649cdc2d680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:29.364467+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470346 data_alloc: 234881024 data_used: 16990208
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 6971392 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8544000/0x0/0x4ffc00000, data 0x305ccaa/0x3128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4800 session 0x5649cf7a3860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:30.364632+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:31.365125+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:32.365253+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:33.365382+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:34.366036+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325354 data_alloc: 234881024 data_used: 10432512
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:35.366419+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x226bc48/0x2336000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:36.367476+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:37.368641+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:38.368868+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:39.369214+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324135 data_alloc: 234881024 data_used: 10432512
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.674302101s of 10.743412971s, submitted: 23
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbebc00 session 0x5649d0ef0000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0811c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:40.369359+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 15327232 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9336000/0x0/0x4ffc00000, data 0x226bc48/0x2336000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0811c00 session 0x5649d0f71860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:41.370109+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:42.370455+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:43.370989+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:44.371947+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:45.372172+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:46.372651+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:47.373033+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:48.373243+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:49.373451+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:50.373622+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:51.373873+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:52.374064+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:53.374364+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:54.374528+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:55.374714+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:56.374922+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:57.375160+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:58.375487+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:59.375685+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:00.375902+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:01.376104+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:02.376301+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:03.376433+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:04.376586+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3000 session 0x5649d01810e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8c00 session 0x5649cf9254a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:05.377555+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649cf925860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce8f7860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbebc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.769863129s of 25.853391647s, submitted: 35
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbebc00 session 0x5649d06265a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3000 session 0x5649cfb07e00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0ef0d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce9850e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649cf7ab680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:06.378387+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 25518080 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:07.379229+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 25518080 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:08.380064+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbd000/0x0/0x4ffc00000, data 0x17e6bc3/0x18af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 25518080 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbebc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbebc00 session 0x5649cdc2d4a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:09.380768+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204025 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 25509888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:10.381306+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 25509888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3000 session 0x5649cdc2d2c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbd000/0x0/0x4ffc00000, data 0x17e6bc3/0x18af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:11.381850+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce399c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649ce399c00 session 0x5649cda95860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce399c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 25509888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649ce399c00 session 0x5649cda950e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:12.382229+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0812800 session 0x5649cfb072c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb48000 session 0x5649cdc2cb40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105013248 unmapped: 25485312 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:13.382586+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105021440 unmapped: 25477120 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:14.382758+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278595 data_alloc: 234881024 data_used: 15269888
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:15.383070+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:16.383211+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.744391441s of 10.835625648s, submitted: 21
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:17.383502+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:18.383864+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:19.384107+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278727 data_alloc: 234881024 data_used: 15269888
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:20.384333+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:21.384514+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:22.384785+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf4800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:23.385095+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 20275200 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:24.385273+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280203 data_alloc: 234881024 data_used: 15269888
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 20242432 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:25.385497+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 17661952 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:26.385898+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 17596416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:27.386316+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.594102859s of 10.725716591s, submitted: 56
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:28.386897+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:29.387229+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf0800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351290 data_alloc: 234881024 data_used: 15495168
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:30.387583+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:31.387900+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:32.388114+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:33.388268+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:34.388595+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351158 data_alloc: 234881024 data_used: 15495168
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:35.388890+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:36.389064+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:37.389261+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.780820847s of 10.622550964s, submitted: 11
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf2400 session 0x5649cf925e00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf0800 session 0x5649cf925860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:38.389553+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:39.389747+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349595 data_alloc: 234881024 data_used: 15495168
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:40.389909+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:41.390090+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:42.390293+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:43.390480+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:44.390659+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349595 data_alloc: 234881024 data_used: 15495168
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:45.390918+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:46.391119+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:47.391277+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:48.391482+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.931000710s of 10.931001663s, submitted: 0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649cfdb8f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649cf9754a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01e9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:49.391653+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d01e9000 session 0x5649ce471860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141386 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:50.391914+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:51.392094+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:52.392287+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:53.392519+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:54.392710+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142898 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa05000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:55.392880+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:56.393036+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:57.393194+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:58.393388+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:59.393547+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143819 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:00.393702+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:01.393899+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:02.394088+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.334737778s of 14.428777695s, submitted: 32
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:03.394211+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:04.394392+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143687 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:05.394592+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:06.394757+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:07.394923+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:08.395127+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:09.395275+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143687 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:10.395432+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:11.395568+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:12.395900+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:13.396112+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:14.396339+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143687 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:15.396534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf5800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf5800 session 0x5649d0f71c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649d0f70f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649d06274a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105971712 unmapped: 24526848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce986000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb40000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.963714600s of 12.966668129s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:16.396729+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb40000 session 0x5649ce8d81e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe8c00 session 0x5649cf9741e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8400 session 0x5649d0ef0d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8400 session 0x5649cfb072c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649d0d2ab40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0175800 session 0x5649cf803680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649cf975e00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:17.396988+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:18.397148+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:19.397314+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162657 data_alloc: 218103808 data_used: 4788224
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:20.397461+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:21.397639+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62cc00 session 0x5649d0e12d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:22.397773+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:23.398007+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:24.398153+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170594 data_alloc: 218103808 data_used: 5869568
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:25.398268+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:26.398439+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:27.398619+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf9800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.266611099s of 11.405261993s, submitted: 42
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:28.398869+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0ac1c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf4800 session 0x5649cf924960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:29.399051+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170726 data_alloc: 218103808 data_used: 5869568
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:30.399230+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:31.399444+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:32.399621+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:33.399781+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:34.399995+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198584 data_alloc: 218103808 data_used: 5976064
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106487808 unmapped: 24010752 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:35.400160+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:36.400286+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:37.400638+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3cf000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:38.400903+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:39.401081+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201352 data_alloc: 218103808 data_used: 6217728
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:40.401241+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3cf000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:41.401408+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.696000099s of 13.824744225s, submitted: 41
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 23977984 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:42.401630+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 23977984 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:43.401880+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3cf000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 23977984 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:44.402110+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197980 data_alloc: 218103808 data_used: 6221824
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:45.402266+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:46.402439+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:47.402647+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:48.402915+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:49.403080+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3d7000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197980 data_alloc: 218103808 data_used: 6221824
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:50.403294+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:51.403483+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3d7000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:52.403661+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:53.403795+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.430935860s of 12.437618256s, submitted: 2
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4800 session 0x5649d0ef12c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d0ef14a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9f3e000/0x0/0x4ffc00000, data 0x1663c4e/0x172e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 24272896 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:54.403985+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260497 data_alloc: 218103808 data_used: 6221824
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 24272896 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:55.404099+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:56.404228+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 24272896 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:57.404385+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:58.404631+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:59.404768+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260365 data_alloc: 218103808 data_used: 6221824
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:00.404898+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc8a000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:01.405026+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:02.405143+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2672 syncs, 4.02 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1723 writes, 5203 keys, 1723 commit groups, 1.0 writes per commit group, ingest: 5.27 MB, 0.01 MB/s
                                           Interval WAL: 1723 writes, 743 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:03.405293+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:04.405430+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315237 data_alloc: 234881024 data_used: 12255232
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:05.405559+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:06.405678+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:07.405823+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 19472384 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:08.405983+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 19472384 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:09.406128+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 19472384 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315237 data_alloc: 234881024 data_used: 12255232
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:10.406313+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 19439616 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.627267838s of 16.744453430s, submitted: 31
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:11.406487+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 17850368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9862000/0x0/0x4ffc00000, data 0x1d38c87/0x1e03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:12.406633+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 17014784 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:13.407097+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:14.407437+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345691 data_alloc: 234881024 data_used: 12300288
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:15.407627+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1dc0c87/0x1e8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:16.407796+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:17.408434+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:18.408677+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 16900096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc8a000 session 0x5649ce59af00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:19.408867+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 16891904 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649cf8021e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204881 data_alloc: 218103808 data_used: 4124672
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:20.409055+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:21.409314+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fd6000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:22.409583+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.292435646s of 11.659756660s, submitted: 96
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:23.409801+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649cda450e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649cda443c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:24.410148+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afbc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 23347200 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afbc00 session 0x5649cfdd03c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:25.410411+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:26.410754+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:27.410976+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:28.411362+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:29.411650+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:30.411917+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:31.412278+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:32.412457+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:33.412591+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:34.412866+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:35.413103+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:36.413335+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:37.413550+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:38.413764+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:39.413987+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:40.414208+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:41.414410+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:42.414663+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:43.414900+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:44.415069+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:45.415267+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:46.415472+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:47.415635+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:48.415846+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:49.415970+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649ce59a5a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649ce59ab40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649ce8f70e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649cdc2d2c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:50.416103+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.890466690s of 28.021562576s, submitted: 47
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161093 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649cdc2c780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afbc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afbc00 session 0x5649cfe0fc20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649cdd00d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23199744 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d0dc74a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649d08e0780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:51.416255+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:52.416409+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:53.416572+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:54.416779+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:55.416953+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211944 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:56.417154+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d080f800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:57.417323+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:58.417521+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:59.417669+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:00.417892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261648 data_alloc: 234881024 data_used: 10063872
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:01.418081+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:02.418214+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:03.418410+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:04.418642+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:05.418790+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261648 data_alloc: 234881024 data_used: 10063872
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:06.418949+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:07.419074+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 20627456 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:08.419260+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 20627456 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:09.419465+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.087341309s of 19.172225952s, submitted: 26
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 19054592 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:10.419655+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313314 data_alloc: 234881024 data_used: 10055680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 18604032 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f960e000/0x0/0x4ffc00000, data 0x1b7bc35/0x1c46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:11.419793+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 18554880 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:12.419999+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18407424 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:13.420193+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18407424 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:14.420956+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18407424 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:15.421080+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324806 data_alloc: 234881024 data_used: 10637312
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:16.421212+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:17.421348+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:18.421525+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:19.421891+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:20.422124+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:21.422716+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:22.423664+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:23.423879+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:24.424840+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:25.424997+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:26.425140+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:27.425263+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:28.425615+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:29.425976+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:30.426154+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:31.426346+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf7edc00 session 0x5649ce8d5a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cddd0800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:32.426628+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf7cfc00 session 0x5649cda94780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf0400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:33.426885+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:34.427142+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:35.427491+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 18374656 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:36.427617+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.557266235s of 27.021982193s, submitted: 61
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df9000 session 0x5649ce59b860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0811400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0811400 session 0x5649cdc2de00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649cda452c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 19038208 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649ce8f81e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649cda44f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:37.427781+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 19030016 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:38.428009+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649ce398000 session 0x5649d0ef1e00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 19030016 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:39.428203+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:40.428534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374730 data_alloc: 234881024 data_used: 10641408
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:41.428898+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:42.429111+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:43.429373+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:44.429530+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbee400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:45.429745+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383374 data_alloc: 234881024 data_used: 11751424
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 17317888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:46.429902+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114925568 unmapped: 15572992 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:47.430115+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 15564800 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:48.430280+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 15564800 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:49.430497+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 15564800 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:50.430742+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414534 data_alloc: 234881024 data_used: 16355328
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:51.431425+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:52.432278+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:53.432705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:54.433180+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.700933456s of 17.824674606s, submitted: 45
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 15491072 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:55.433705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414366 data_alloc: 234881024 data_used: 16355328
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 13524992 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:56.433872+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6070272 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:57.435947+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 6152192 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:58.437892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f850f000/0x0/0x4ffc00000, data 0x2c81c97/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 6152192 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:59.439431+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 6152192 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:00.439578+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522140 data_alloc: 234881024 data_used: 17633280
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 6144000 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:01.439891+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 6144000 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:02.440475+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:03.441101+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:04.441349+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f84f0000/0x0/0x4ffc00000, data 0x2ca0c97/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:05.442187+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517884 data_alloc: 234881024 data_used: 17637376
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:06.442516+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:07.442889+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:08.443410+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.933344841s of 14.779636383s, submitted: 413
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:09.443587+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f84e6000/0x0/0x4ffc00000, data 0x2caac97/0x2d76000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 5971968 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:10.443729+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518164 data_alloc: 234881024 data_used: 17637376
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 5971968 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:11.444056+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbee400 session 0x5649cd74b2c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 10944512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5800 session 0x5649d01812c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:12.444279+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:13.444698+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:14.445045+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:15.445490+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f916c000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334544 data_alloc: 234881024 data_used: 9682944
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:16.445700+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:17.446076+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:18.446235+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f916c000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:19.446420+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:20.446610+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d080f800 session 0x5649d090a780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333780 data_alloc: 234881024 data_used: 9682944
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.768727303s of 11.869369507s, submitted: 43
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 10928128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:21.446733+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf2c00 session 0x5649ce8f83c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f960a000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:22.446921+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:23.447059+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:24.447251+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:25.447400+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:26.448218+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:27.448478+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:28.448727+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:29.449202+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:30.449397+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:31.449651+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:32.449963+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:33.450624+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:34.451086+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:35.451219+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:36.451420+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:37.451593+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:38.451827+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:39.452015+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:40.452174+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:41.452475+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:42.452772+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:43.452933+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:44.453115+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:45.453276+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:46.453413+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6400 session 0x5649d0d10b40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d0d101e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0810000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0810000 session 0x5649cda44d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d06265a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.952045441s of 26.003047943s, submitted: 21
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf2c00 session 0x5649d0180d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d080f800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d080f800 session 0x5649d0f714a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6400 session 0x5649cf8030e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:47.453618+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5400 session 0x5649d0656960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649ce59b4a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:48.453975+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:49.454105+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:50.454278+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217597 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fab000/0x0/0x4ffc00000, data 0x11e8bc3/0x12b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf983400 session 0x5649cfdd1680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa04c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:51.454443+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:52.454597+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:53.454774+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:54.454973+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 21250048 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:55.455117+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246001 data_alloc: 218103808 data_used: 6885376
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 21250048 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:56.455261+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fab000/0x0/0x4ffc00000, data 0x11e8bc3/0x12b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 21250048 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:57.455382+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.537994385s of 10.609987259s, submitted: 18
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88400 session 0x5649d0180f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649ce8d43c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:58.455539+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:59.455780+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:00.456006+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185801 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:01.456144+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:02.456293+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:03.456453+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:04.456615+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:05.457404+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185801 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:06.457534+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:07.457683+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:08.457881+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.857390404s of 10.914286613s, submitted: 20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,0,1,1])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0f71860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:09.457977+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:10.458111+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248497 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:11.458223+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97c400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97c400 session 0x5649cf803680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c79000/0x0/0x4ffc00000, data 0x151abc3/0x15e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97c400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97c400 session 0x5649cf802f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:12.458346+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649cf8023c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88400 session 0x5649cf802d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:13.458468+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 23748608 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:14.458553+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 23748608 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:15.458694+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 20963328 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0d305a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649d01692c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303764 data_alloc: 234881024 data_used: 10366976
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:16.458868+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649d0168780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:17.459005+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:18.459158+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:19.459280+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:20.459419+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:21.459608+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:22.459748+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:23.459885+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:24.460024+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:25.460219+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:26.460463+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:27.460638+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:28.460823+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:29.460945+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:30.461067+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:31.461184+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:32.461303+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:33.461414+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:34.461543+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:35.461654+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:36.461777+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:37.461898+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:38.462088+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:39.462292+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:40.462428+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:41.462580+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:42.462730+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc89c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc89c00 session 0x5649d0d10d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe8400 session 0x5649d0d11a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8000 session 0x5649d0d11680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649d0d10780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc89c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.183135986s of 34.295379639s, submitted: 35
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:43.462891+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc89c00 session 0x5649cf974960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe8400 session 0x5649cf975860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649d0e12960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8000 session 0x5649d0ef0d20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649cf948b40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:44.463084+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:45.463249+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222981 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:46.463421+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:47.463543+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:48.463694+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6000 session 0x5649cf949860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:49.463875+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 21282816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf985400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:50.464051+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238702 data_alloc: 218103808 data_used: 4853760
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:51.464238+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:52.464445+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:53.464579+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:54.464794+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:55.464950+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249494 data_alloc: 218103808 data_used: 6467584
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:56.465108+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:57.465269+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:58.465455+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:59.465596+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:00.465704+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa05000 session 0x5649cf9245a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b000 session 0x5649cf975c20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249494 data_alloc: 218103808 data_used: 6467584
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:01.465887+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.421611786s of 18.642799377s, submitted: 39
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 20692992 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:02.466009+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 18661376 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9d43000/0x0/0x4ffc00000, data 0x144fc25/0x1519000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:03.466205+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 18653184 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:04.466421+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 18653184 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:05.466610+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 18644992 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278352 data_alloc: 218103808 data_used: 6868992
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:06.466741+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:07.466859+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9d3f000/0x0/0x4ffc00000, data 0x1453c25/0x151d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:08.467000+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:09.467123+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:10.467266+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278352 data_alloc: 218103808 data_used: 6868992
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:11.467400+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb47400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050851822s of 10.144117355s, submitted: 41
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9d3f000/0x0/0x4ffc00000, data 0x1453c25/0x151d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:12.467578+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:13.467729+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:14.467863+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf985400 session 0x5649d0180780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649d0d2ab40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:15.468022+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 19120128 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b000 session 0x5649cf803a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199614 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:16.468245+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:17.468396+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d012a400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:18.468564+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:19.468737+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:20.469051+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199614 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:21.469190+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:22.469366+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:23.469509+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:24.469641+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:25.469764+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: mgrc ms_handle_reset ms_handle_reset con 0x5649cfb41c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1083080178
Jan 20 19:18:35 compute-0 ceph-osd[82836]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1083080178,v1:192.168.122.100:6801/1083080178]
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: get_auth_request con 0x5649cf97cc00 auth_method 0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: mgrc handle_mgr_configure stats_period=5
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199614 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:26.469933+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:27.470127+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:28.470288+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.266271591s of 17.418762207s, submitted: 48
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:29.470422+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:30.470557+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199482 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:31.470703+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:32.470902+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:33.471116+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:34.471240+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:35.471340+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199482 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:36.471478+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:37.471615+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:38.471751+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdaa8400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdaa8400 session 0x5649cfc1f680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf985c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf985c00 session 0x5649cfc1e3c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa05800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa05800 session 0x5649cf802b40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0813800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0813800 session 0x5649cf8030e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0813800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.329735756s of 10.332962990s, submitted: 1
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:39.471861+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0813800 session 0x5649cf803680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 22380544 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:40.471977+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 22380544 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234242 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:41.472157+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa010000/0x0/0x4ffc00000, data 0x1183bc3/0x124c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:42.472372+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:43.472510+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf0c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf0c00 session 0x5649d0f714a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:44.472673+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa010000/0x0/0x4ffc00000, data 0x1183bc3/0x124c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6000 session 0x5649d0f71860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:45.472794+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce8d43c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234242 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:46.472961+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 22306816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb41400 session 0x5649ce8d85a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:47.475950+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 22306816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:48.476096+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 22306816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:49.476223+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:50.476321+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260680 data_alloc: 218103808 data_used: 6393856
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:51.476446+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:52.476617+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:53.476735+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:54.476838+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:55.476939+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260680 data_alloc: 218103808 data_used: 6393856
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:56.477063+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:57.477191+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:58.478177+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.720109940s of 19.755855560s, submitted: 12
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:59.478298+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 20299776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:00.478422+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 19587072 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292800 data_alloc: 218103808 data_used: 6426624
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:01.478555+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 19587072 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6800 session 0x5649d0f70f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649cd74a960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:02.478669+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:03.478923+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93de000/0x0/0x4ffc00000, data 0x1db3c35/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:04.479051+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:05.479171+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df7c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df7c00 session 0x5649d0ef1860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df8800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df8800 session 0x5649d0ef14a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355699 data_alloc: 218103808 data_used: 6426624
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:06.479365+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 23388160 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3c00 session 0x5649d0ef10e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:07.479510+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0ef0b40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 23388160 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93bf000/0x0/0x4ffc00000, data 0x1dd2c35/0x1e9d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:08.479682+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 23388160 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:09.479853+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df7c00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 23339008 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:10.479977+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 18006016 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421657 data_alloc: 234881024 data_used: 15912960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:11.480162+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 18006016 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.846256256s of 13.088277817s, submitted: 78
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:12.480275+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93be000/0x0/0x4ffc00000, data 0x1dd2c45/0x1e9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:13.480431+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:14.480642+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:15.480926+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:16.481061+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421601 data_alloc: 234881024 data_used: 15912960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:17.481184+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93b4000/0x0/0x4ffc00000, data 0x1ddcc45/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:18.481311+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 17760256 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:19.481431+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 17760256 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:20.481545+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 14057472 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:21.481850+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1461783 data_alloc: 234881024 data_used: 16588800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 13729792 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:22.481984+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 14254080 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f64000/0x0/0x4ffc00000, data 0x222cc45/0x22f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:23.482146+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:24.482266+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f64000/0x0/0x4ffc00000, data 0x222cc45/0x22f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:25.482456+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f64000/0x0/0x4ffc00000, data 0x222cc45/0x22f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:26.482593+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465059 data_alloc: 234881024 data_used: 17182720
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:27.482714+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:28.482895+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:29.483040+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 14204928 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:30.483178+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.803529739s of 18.944644928s, submitted: 59
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:31.483346+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466219 data_alloc: 234881024 data_used: 17256448
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f62000/0x0/0x4ffc00000, data 0x222dc45/0x22f9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:32.483503+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:33.483641+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:34.483892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 14180352 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df7c00 session 0x5649d06270e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6800 session 0x5649ce8f9e00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:35.484073+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df8000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df8000 session 0x5649cdc2d2c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:36.484206+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296890 data_alloc: 218103808 data_used: 6426624
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9a37000/0x0/0x4ffc00000, data 0x14dabd3/0x15a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:37.484335+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:38.484488+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9a37000/0x0/0x4ffc00000, data 0x14dabd3/0x15a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:39.484608+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:40.484719+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649d0ac0780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb41400 session 0x5649d0656960
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:41.484846+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296890 data_alloc: 218103808 data_used: 6426624
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.182349205s of 10.414586067s, submitted: 23
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0ef12c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:42.484955+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:43.485081+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:44.485201+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:45.485380+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:46.485511+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:47.485634+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:48.485771+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:49.485919+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:50.486065+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:51.486239+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:52.486376+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:53.486500+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:54.486673+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:55.486858+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:56.487005+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:57.487160+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:58.487306+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:59.487480+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:00.487608+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:01.487877+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:02.488006+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:03.488125+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:04.488280+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:05.488416+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:06.488574+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:07.488701+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf4400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf4400 session 0x5649cda450e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01fc000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d01fc000 session 0x5649ce470f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe9000 session 0x5649d0d10f00
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe9000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe9000 session 0x5649cf925680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.502956390s of 26.523035049s, submitted: 9
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb41400 session 0x5649ce59b680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf4400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:08.488892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf4400 session 0x5649cf803860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01fc000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d01fc000 session 0x5649d090a3c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d01694a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0d103c0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 26525696 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:09.489153+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 26525696 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:10.489395+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 26517504 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b400 session 0x5649ce471a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:11.489598+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270568 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d9800
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf9d9800 session 0x5649cf9250e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 26517504 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:12.489793+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb40400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb40400 session 0x5649d01805a0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf7400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf7400 session 0x5649d0181860
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c75000/0x0/0x4ffc00000, data 0x151dbd3/0x15e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 27566080 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:13.490145+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf7400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b400
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115548160 unmapped: 27549696 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:14.490322+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:15.490463+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:16.490613+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324602 data_alloc: 234881024 data_used: 10555392
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:17.490736+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:18.490872+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:19.491028+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:20.491156+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:21.491341+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324602 data_alloc: 234881024 data_used: 10555392
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:22.491464+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:23.491575+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:24.491733+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.495733261s of 16.590114594s, submitted: 20
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 23748608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:25.491884+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 23666688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:26.492036+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361224 data_alloc: 234881024 data_used: 10567680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9910000/0x0/0x4ffc00000, data 0x1881be3/0x194c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:27.492168+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:28.492388+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:29.492584+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:30.492700+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:31.492879+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361224 data_alloc: 234881024 data_used: 10567680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:32.493017+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9901000/0x0/0x4ffc00000, data 0x1890be3/0x195b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:33.493137+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9901000/0x0/0x4ffc00000, data 0x1890be3/0x195b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:34.493307+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:35.493429+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:36.493566+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360096 data_alloc: 234881024 data_used: 10567680
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.018393517s of 12.156224251s, submitted: 39
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf7400 session 0x5649cda45a40
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b400 session 0x5649d0d2b0e0
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdaa8000
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:37.493677+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdaa8000 session 0x5649cd74a780
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:38.493886+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:39.494086+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:40.494289+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:41.494494+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:42.494703+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:43.494885+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:44.495137+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:45.495308+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:46.495497+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:47.495689+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:48.495900+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:49.496060+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:50.496180+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:51.496295+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:52.496419+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:53.496581+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:54.496730+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:55.496891+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:56.497056+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:57.497213+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:58.497432+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:59.497654+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:00.497781+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:01.497923+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:02.498069+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:03.498238+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:04.498407+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:05.498559+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:06.498706+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:07.498828+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:08.498980+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:09.499195+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:10.499350+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:11.499506+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:12.499705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:13.499891+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:14.500089+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:15.500225+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:16.500401+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:17.500547+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:18.500731+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:19.500900+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:20.501066+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:21.501239+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:22.501428+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:23.501603+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:24.501906+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:25.502094+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:26.502236+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:27.502586+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:28.502892+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:29.503044+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:30.503267+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:31.503471+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:32.503705+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:33.503899+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:34.504165+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:35.504384+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:36.504552+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:37.504700+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:38.504880+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:39.505035+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:40.505183+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:41.505353+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:42.505520+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:43.505736+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:44.505951+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:45.506248+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:46.507019+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:47.507262+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:48.507740+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:49.508171+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:50.508455+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:51.508788+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:52.509272+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:53.509493+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:54.510326+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:55.510793+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:56.511258+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:57.511396+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:58.511547+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:59.511695+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:00.511825+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:01.512444+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:18:35 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:18:35 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:02.512554+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'config diff' '{prefix=config diff}'
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'config show' '{prefix=config show}'
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:03.512757+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 26689536 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:18:35 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:04.512856+0000)
Jan 20 19:18:35 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 26550272 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:18:35 compute-0 ceph-osd[82836]: do_command 'log dump' '{prefix=log dump}'
Jan 20 19:18:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 19:18:35 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/593287436' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.25738 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/30701454' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1046960484' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/232166718' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/961018758' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/926278231' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2042278903' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2059534676' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1946649339' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1277145328' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1717316145' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1067152326' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1923728329' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/277486606' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:18:35 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16905 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:35 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25789 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:18:36 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455861759' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:36.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:36 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25798 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:36 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25804 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:36.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:36 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16923 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:36 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25813 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:37 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16929 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 crontab[278989]: (root) LIST (root)
Jan 20 19:18:37 compute-0 nova_compute[254061]: 2026-01-20 19:18:37.150 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:37 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25819 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:37 compute-0 nova_compute[254061]: 2026-01-20 19:18:37.224 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:37.231Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:18:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:37.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:18:37 compute-0 sudo[279005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:18:37 compute-0 sudo[279005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:37 compute-0 sudo[279005]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='client.25835 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1156747302' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1327282327' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='client.16890 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/593287436' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3455861759' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16944 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 20 19:18:37 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3657788346' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16959 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25846 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:38.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16971 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25858 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:38.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 20 19:18:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988473671' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:18:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.16986 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25873 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 20 19:18:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386780524' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17001 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3672578918' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/290615965' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.16905 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.25789 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2705507507' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.25798 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.25804 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2971912278' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.16923 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.25813 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.16929 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.25819 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.16944 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.25834 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/591727431' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3657788346' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3692960510' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1597525270' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25879 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25904 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 20 19:18:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3180726092' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17010 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25891 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25919 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 20 19:18:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1179308112' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:39 compute-0 sudo[279359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:18:39 compute-0 sudo[279359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:39 compute-0 sudo[279359]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 20 19:18:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2582049337' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:18:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Jan 20 19:18:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25903 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 20 19:18:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2475537077' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:18:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:40.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:40.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 20 19:18:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1571138287' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 20 19:18:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1024579162' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:18:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 20 19:18:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166619187' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:18:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 20 19:18:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2188463402' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 19:18:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/373515885' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.16959 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.25846 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.16971 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.25858 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/157689810' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1988473671' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.16986 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.25873 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1386780524' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.17001 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/400780412' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.25879 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.25904 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3180726092' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.17010 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/878178135' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1179308112' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3257777850' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2582049337' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 20 19:18:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2343339084' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25955 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:41 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 19:18:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 20 19:18:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/550994586' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:18:41 compute-0 systemd[1]: Started Hostname Service.
Jan 20 19:18:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 20 19:18:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1141062113' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:18:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:42 compute-0 nova_compute[254061]: 2026-01-20 19:18:42.152 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:42 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17112 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:42 compute-0 nova_compute[254061]: 2026-01-20 19:18:42.225 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:42.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:42.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.25891 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.25919 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.25903 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2475537077' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.25937 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1571138287' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1024579162' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3434496128' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2456035063' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3166619187' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2188463402' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2470155505' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/373515885' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2343339084' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.25955 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/550994586' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3530430819' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3179511854' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1141062113' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2792040582' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1534589282' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17124 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25982 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 20 19:18:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160064386' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:18:42 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 19:18:42 compute-0 podman[279798]: 2026-01-20 19:18:42.950010623 +0000 UTC m=+0.077372132 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:18:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 20 19:18:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/202938467' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.25997 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17163 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26015 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 20 19:18:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921896660' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 19:18:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3975782022' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4068641292' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.17112 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2824773566' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/169678701' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4160064386' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3051155188' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3117842084' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1081537178' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/202938467' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1149177079' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4144880331' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17181 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644053776' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26021 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17196 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:44.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26026 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26039 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886368380' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17208 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26035 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26044 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26054 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2169609872' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.17124 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.25982 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.25997 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.17163 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.26015 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1386112695' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1921896660' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1967287549' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3975782022' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.17181 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2452017913' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2644053776' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.26021 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2943501275' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/585160222' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.17196 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.26026 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.26039 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1886368380' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26053 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26059 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17244 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26072 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26080 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.17208 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.26035 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.26044 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.26054 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2169609872' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.17223 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.26053 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.26059 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.17244 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.26072 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4170113901' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3552142266' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26101 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 20 19:18:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2076554778' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:18:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:46.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:46.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26113 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17313 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:47 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344865162' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.26080 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.17280 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1389654750' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.26101 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/701218280' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2076554778' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1242923288' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4161265886' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.26113 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.17313 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1260951505' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4248585618' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2572494361' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2344865162' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:18:47 compute-0 nova_compute[254061]: 2026-01-20 19:18:47.154 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:47 compute-0 nova_compute[254061]: 2026-01-20 19:18:47.227 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:47.232Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:18:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:47.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:18:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/149014957' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350305912' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:18:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:48.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:18:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 7025 writes, 31K keys, 7023 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 7025 writes, 7023 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1547 writes, 6894 keys, 1547 commit groups, 1.0 writes per commit group, ingest: 11.88 MB, 0.02 MB/s
                                           Interval WAL: 1547 writes, 1547 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    102.6      0.46              0.14        17    0.027       0      0       0.0       0.0
                                             L6      1/0   12.47 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.5    136.5    117.5      1.83              0.58        16    0.114     90K   8861       0.0       0.0
                                            Sum      1/0   12.47 MB   0.0      0.2     0.0      0.2       0.3      0.1       0.0   5.5    109.0    114.5      2.29              0.73        33    0.069     90K   8861       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0     91.8     90.7      0.73              0.18         8    0.091     26K   2591       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    136.5    117.5      1.83              0.58        16    0.114     90K   8861       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    103.5      0.46              0.14        16    0.029       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.046, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.26 GB write, 0.11 MB/s write, 0.24 GB read, 0.10 MB/s read, 2.3 seconds
                                           Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.07 GB read, 0.11 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564b95c0c9b0#2 capacity: 304.00 MB usage: 20.55 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000175 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1353,19.85 MB,6.52914%) FilterBlock(34,258.36 KB,0.0829948%) IndexBlock(34,464.48 KB,0.14921%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:18:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 20 19:18:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320128367' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 19:18:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:48.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.26119 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3753089149' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2095243781' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3695539360' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/149014957' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2573289692' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3421760309' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2849168690' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3350305912' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/134716314' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:18:48 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17388 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 20 19:18:49 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073618167' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26240 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26246 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26203 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1965160756' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1320128367' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1053590229' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1053590229' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1831368419' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4023905199' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/386800590' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2049296771' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3073618167' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 20 19:18:49 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1760484020' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26252 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:49] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:18:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:49] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:49 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26258 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17424 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26264 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:50.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:50.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 20 19:18:50 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810115792' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26279 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.17388 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.26240 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.26246 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.26203 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1760484020' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.26252 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.26258 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3560920529' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.17424 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.26264 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/810115792' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4229881619' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:18:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26227 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26297 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17448 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26318 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 20 19:18:51 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2182468922' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26251 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26330 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.26279 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.26227 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/156784614' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3780940745' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.26297 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.17448 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2720622636' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2771473284' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.26318 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2182468922' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 20 19:18:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 20 19:18:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732495124' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 20 19:18:52 compute-0 nova_compute[254061]: 2026-01-20 19:18:52.155 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:52 compute-0 nova_compute[254061]: 2026-01-20 19:18:52.228 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:52.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17472 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:52.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:52 compute-0 ovs-appctl[281816]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 20 19:18:52 compute-0 ovs-appctl[281825]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 20 19:18:52 compute-0 ovs-appctl[281833]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17484 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:52 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.26251 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.26330 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2630028068' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2732495124' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2042708681' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1557175705' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.17472 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3515400149' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:53 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26281 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 20 19:18:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1233870419' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 20 19:18:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3408852643' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 20 19:18:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:54 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26302 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17517 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26405 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:54.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='client.17484 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='client.26281 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1233870419' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/833228686' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3408852643' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2016315116' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17526 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:54.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:54 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26308 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 19:18:54 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150497665' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:18:55
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', '.mgr', '.nfs', 'vms', 'volumes', 'images', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 20 19:18:55 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1122880915' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.26302 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.17517 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.26405 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.17526 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1289463804' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.26308 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4150497665' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2231467763' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4252005727' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1122880915' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:18:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 20 19:18:55 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2804853806' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26335 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17562 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26350 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:18:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:56.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:56 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26447 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:18:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:56.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:18:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1927680657' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/143109398' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2804853806' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4023636351' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Jan 20 19:18:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1156723082' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:18:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:18:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Jan 20 19:18:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553764720' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 nova_compute[254061]: 2026-01-20 19:18:57.157 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:57 compute-0 nova_compute[254061]: 2026-01-20 19:18:57.229 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:18:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:57.234Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:18:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:18:57.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:18:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Jan 20 19:18:57 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903128695' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26374 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.26335 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.17562 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.26350 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.26447 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1156723082' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2534031112' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1884377956' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2553764720' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/497305693' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1085821015' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2903128695' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26474 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Jan 20 19:18:57 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2504935951' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26380 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:58 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17598 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:18:58.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:58 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26492 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:18:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:18:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:18:58.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:18:58 compute-0 ceph-mon[74381]: from='client.26374 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:58 compute-0 ceph-mon[74381]: from='client.26474 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:58 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2504935951' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:58 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3041269352' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Jan 20 19:18:58 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4123663783' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:18:58 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Jan 20 19:18:58 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047107638' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 20 19:18:58 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26501 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Jan 20 19:18:59 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395184878' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17622 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.26380 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.17598 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.26492 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3047107638' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3283644740' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/781663074' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1395184878' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2662573082' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3659450300' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26413 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:18:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:18:59 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26525 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Jan 20 19:18:59 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314810746' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 20 19:18:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:18:59 compute-0 sudo[283474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:18:59 compute-0 sudo[283474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:59 compute-0 sudo[283474]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:00.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26531 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17634 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:00.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:00 compute-0 ceph-mon[74381]: from='client.26501 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:00 compute-0 ceph-mon[74381]: from='client.17622 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1314810746' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 20 19:19:00 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1171745248' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:19:00 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Jan 20 19:19:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1165831778' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Jan 20 19:19:01 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561093544' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.26413 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.26525 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.26531 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.17634 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2023269326' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.17643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3452843525' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1165831778' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/421870724' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4257496781' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/561093544' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1393355091' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
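
The burst of ceph-mon dispatch lines above (and throughout this capture) is dashboard and monitoring polling against the admin socket. A quick way to see which commands dominate is to tally the cmd prefixes; a sketch, assuming the audit channel is also written to /var/log/ceph/ceph.audit.log (the path varies by deployment and is an assumption here):

    import json
    import re
    from collections import Counter

    # Matches the entity='...' cmd=[{...}]: dispatch shape of the audit lines.
    AUDIT_RE = re.compile(r"entity='(?P<entity>[^']+)' cmd=\[(?P<cmd>\{.*\})\]: dispatch")

    counts = Counter()
    with open("/var/log/ceph/ceph.audit.log") as fh:   # path is an assumption
        for line in fh:
            m = AUDIT_RE.search(line)
            if m:
                counts[json.loads(m.group("cmd"))["prefix"]] += 1
    print(counts.most_common(5))   # e.g. [('osd df', ...), ('pg dump', ...), ...]
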
Jan 20 19:19:01 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26564 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17664 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:01 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26455 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26576 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:02 compute-0 nova_compute[254061]: 2026-01-20 19:19:02.159 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:02 compute-0 nova_compute[254061]: 2026-01-20 19:19:02.230 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17676 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:02 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
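
The pg_autoscaler cycle above encodes a simple computation: raw PG target = capacity ratio x bias x (target PGs per OSD x OSD count), quantized to a power of two with a per-pool floor. A minimal sketch of that arithmetic follows. The multiplier 300 is inferred from the logged numbers (e.g. 0.000665858 x 300 = 0.19975..., matching the 'images' line) and would correspond to 3 OSDs with mon_target_pg_per_osd=100; the floors (32 by default, 16 for the CephFS metadata pool, 1 for '.mgr') are likewise inferred from the "quantized to" values, not read from this cluster's configuration.

    import math

    def quantized_pg_target(capacity_ratio, bias, pg_num_min=32,
                            osds=3, pg_per_osd=100):
        raw = capacity_ratio * bias * osds * pg_per_osd        # "pg target" in the log
        pow2 = (2 ** max(0, round(math.log2(raw)))) if raw >= 1 else 1
        return max(pg_num_min, pow2)                           # "quantized to N"

    print(quantized_pg_target(0.000665858301588852, 1.0))       # 'images' -> 32
    print(quantized_pg_target(5.087256625643029e-07, 4.0, 16))  # cephfs meta -> 16
    print(quantized_pg_target(7.185749983720779e-06, 1.0, 1))   # '.mgr' -> 1
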
Jan 20 19:19:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:02.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:02.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.26564 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.17664 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.26455 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.26576 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.17676 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/797443720' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4029729245' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:19:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Jan 20 19:19:02 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1748687289' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Jan 20 19:19:03 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133925925' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 podman[283870]: 2026-01-20 19:19:03.362099355 +0000 UTC m=+0.064159932 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:19:03 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26473 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17697 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26612 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1748687289' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3847927208' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1364840195' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2133925925' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.26473 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1090907764' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 ceph-mon[74381]: from='client.17697 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:03 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 19:19:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:03 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:04.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:04 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.17718 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 19:19:04 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/208217159' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:04.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:04 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 19:19:04 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 19:19:04 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26494 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: from='client.26612 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/123925289' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:04 compute-0 ceph-mon[74381]: from='client.17706 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: from='client.17718 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/208217159' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2605275242' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:19:04 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Jan 20 19:19:04 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2935333417' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mon[74381]: from='client.26494 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2935333417' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3665673793' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/234508750' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3908910539' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1291884473' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26515 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:06.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26521 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:06 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26648 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:06 compute-0 ceph-mon[74381]: from='client.26515 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mon[74381]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3937983062' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mon[74381]: from='client.26521 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mon[74381]: from='client.26648 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/427718022' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/915109913' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Jan 20 19:19:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:07 compute-0 nova_compute[254061]: 2026-01-20 19:19:07.163 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:07 compute-0 nova_compute[254061]: 2026-01-20 19:19:07.232 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:07.237Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
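
The dispatcher error above says both ceph-dashboard webhook receivers timed out. A minimal reachability probe, reproducing the POST Alertmanager makes: only the URL comes from the log line; the empty-alerts payload and the 5-second timeout are stand-ins for illustration.

    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except OSError as exc:   # timeouts and refused connections surface here
        print("unreachable:", exc)
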
Jan 20 19:19:07 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:07 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26666 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2465205850' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 20 19:19:07 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3829679859' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:07 compute-0 ceph-mon[74381]: from='client.26545 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:07 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26551 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:08.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:08 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26678 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:08.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:08 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26684 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 ceph-mon[74381]: from='client.26666 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 ceph-mon[74381]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:08 compute-0 ceph-mon[74381]: from='client.26551 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4139993890' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 ceph-mon[74381]: from='client.26678 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/292779801' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:08 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4191861374' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.155 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.155 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:19:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:19:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2150578466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.621 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.751 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.752 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4327MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.752 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.753 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:19:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:19:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.824 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.824 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:19:09 compute-0 nova_compute[254061]: 2026-01-20 19:19:09.837 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:19:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:09 compute-0 ceph-mon[74381]: from='client.26684 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/688561628' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Jan 20 19:19:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2150578466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:09 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2240711116' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:09 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 20 19:19:09 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:09.983800) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:19:09 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 20 19:19:09 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936749983857, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2494, "num_deletes": 251, "total_data_size": 4444872, "memory_usage": 4537960, "flush_reason": "Manual Compaction"}
Jan 20 19:19:09 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936750010886, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4336832, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29960, "largest_seqno": 32453, "table_properties": {"data_size": 4324860, "index_size": 7506, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30111, "raw_average_key_size": 22, "raw_value_size": 4299214, "raw_average_value_size": 3182, "num_data_blocks": 319, "num_entries": 1351, "num_filter_entries": 1351, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936546, "oldest_key_time": 1768936546, "file_creation_time": 1768936749, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 27115 microseconds, and 7888 cpu microseconds.
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.010937) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4336832 bytes OK
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.010957) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.012685) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.012697) EVENT_LOG_v1 {"time_micros": 1768936750012694, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.012714) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4433559, prev total WAL file size 4433559, number of live WAL files 2.
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.013801) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4235KB)], [65(12MB)]
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936750013888, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 17416661, "oldest_snapshot_seqno": -1}
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26702 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6878 keys, 15273574 bytes, temperature: kUnknown
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936750129746, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 15273574, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15227745, "index_size": 27541, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 176817, "raw_average_key_size": 25, "raw_value_size": 15104283, "raw_average_value_size": 2196, "num_data_blocks": 1102, "num_entries": 6878, "num_filter_entries": 6878, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936750, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.130013) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 15273574 bytes
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.131341) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 150.2 rd, 131.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 12.5 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 7399, records dropped: 521 output_compression: NoCompression
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.131356) EVENT_LOG_v1 {"time_micros": 1768936750131348, "job": 36, "event": "compaction_finished", "compaction_time_micros": 115946, "compaction_time_cpu_micros": 56380, "output_level": 6, "num_output_files": 1, "total_output_size": 15273574, "num_input_records": 7399, "num_output_records": 6878, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936750132019, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936750133884, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.013679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.133985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.133990) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.133992) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.133996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:10 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:19:10.133998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
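
The compaction summary above reports read-write-amplify(7.5) and write-amplify(3.5). Those ratios can be reproduced from the byte counts in the surrounding EVENT_LOG_v1 entries; the formulas below are inferred from the logged numbers, not taken from the RocksDB source.

    # Numbers copied from the EVENT_LOG_v1 payloads above.
    l0_input = 4336832       # file_size of flushed table #67 (flush event, job 35)
    total_input = 17416661   # input_data_size (compaction_started, job 36)
    output = 15273574        # total_output_size (compaction_finished, job 36)

    write_amp = output / l0_input                  # ~3.5 in the summary line
    rw_amp = (total_input + output) / l0_input     # ~7.5 in the summary line
    print(f"write-amplify {write_amp:.1f}, read-write-amplify {rw_amp:.1f}")
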
Jan 20 19:19:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:19:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2931887475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:10 compute-0 nova_compute[254061]: 2026-01-20 19:19:10.270 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:19:10 compute-0 nova_compute[254061]: 2026-01-20 19:19:10.274 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:19:10 compute-0 nova_compute[254061]: 2026-01-20 19:19:10.291 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:19:10 compute-0 nova_compute[254061]: 2026-01-20 19:19:10.292 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:19:10 compute-0 nova_compute[254061]: 2026-01-20 19:19:10.293 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
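
During the resource audit above, nova shells out to `ceph df --format=json` to size the RBD-backed disk pool (the 59.98 GiB free_disk figure). A sketch that runs the same command and derives the totals: the 'stats' keys are standard `ceph df` JSON output, while the GiB conversion and printed format are mine.

    import json
    import subprocess

    out = subprocess.check_output(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(out)["stats"]
    gib = 1024 ** 3
    print("total: %.2f GiB, avail: %.2f GiB" %
          (stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))
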
Jan 20 19:19:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:10.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26708 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:10 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:10.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:10 compute-0 ceph-mon[74381]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:19:10 compute-0 ceph-mon[74381]: from='client.26702 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2931887475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:10 compute-0 ceph-mon[74381]: from='client.26708 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3852221245' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
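[annotation] The audit dispatch lines above show the raw JSON bodies of monitor commands ("df", "osd perf", "osd pool autoscale-status", ...) as submitted by the various clients. The same payloads can be sent programmatically through the librados Python binding; a sketch, assuming a readable /etc/ceph/ceph.conf and an admin keyring on the host:

```python
import json
import rados  # librados Python binding, shipped with Ceph

# Send the same mon command JSON seen in the audit log, e.g. the
# {"prefix": "df", "format": "json"} dispatched by client.openstack.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumed path
cluster.connect()
cmd = json.dumps({"prefix": "df", "format": "json"})
ret, outbuf, outs = cluster.mon_command(cmd, b'')  # (errno, stdout, stderr)
if ret == 0:
    print(sorted(json.loads(outbuf)))  # top-level keys, e.g. 'pools', 'stats'
else:
    print('mon_command failed:', ret, outs)
cluster.shutdown()
```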
Jan 20 19:19:11 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26726 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:11 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26732 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
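[annotation] The recurring _set_new_cache_sizes line is the monitor re-splitting its memory budget; the three allocations are 4 MiB-aligned carve-outs of roughly a third each of the ~973 MiB cache_size. Quick arithmetic check on the logged numbers:

```python
# Numbers copied from the _set_new_cache_sizes line above.
MiB = 1 << 20
cache_size, inc, full, kv = 1020054731, 343932928, 348127232, 318767104
for name, v in [('inc', inc), ('full', full), ('kv', kv)]:
    print(name, v // MiB, 'MiB', '4MiB-aligned' if v % (4 * MiB) == 0 else 'unaligned')
print('sum', (inc + full + kv) // MiB, 'MiB of', round(cache_size / MiB, 1), 'MiB budget')
# -> inc 328, full 332, kv 304; 964 MiB carved out of a ~972.8 MiB budget
```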
Jan 20 19:19:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1687630800' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Jan 20 19:19:12 compute-0 ceph-mon[74381]: from='client.26726 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:12 compute-0 nova_compute[254061]: 2026-01-20 19:19:12.168 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:12 compute-0 nova_compute[254061]: 2026-01-20 19:19:12.233 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:12 compute-0 nova_compute[254061]: 2026-01-20 19:19:12.293 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:12.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:12.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:13 compute-0 podman[284323]: 2026-01-20 19:19:13.126234729 +0000 UTC m=+0.096075937 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
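[annotation] The podman line above is the periodic health check for the ovn_controller container reporting health_status=healthy (the long tail is podman echoing the container's config_data). The same check can be run on demand; a sketch using podman's healthcheck subcommand, which exits 0 when the container's configured test passes:

```python
import subprocess

# Run the ovn_controller health check manually (container name from the log).
rc = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller']).returncode
print('healthy' if rc == 0 else f'unhealthy (rc={rc})')
```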
Jan 20 19:19:13 compute-0 nova_compute[254061]: 2026-01-20 19:19:13.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:13 compute-0 ceph-mon[74381]: from='client.26732 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:19:13 compute-0 ceph-mon[74381]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/796990771' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1418967858' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Jan 20 19:19:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:14 compute-0 nova_compute[254061]: 2026-01-20 19:19:14.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:14 compute-0 nova_compute[254061]: 2026-01-20 19:19:14.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:19:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:14.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:14.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:15 compute-0 nova_compute[254061]: 2026-01-20 19:19:15.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:15 compute-0 nova_compute[254061]: 2026-01-20 19:19:15.131 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:19:15 compute-0 nova_compute[254061]: 2026-01-20 19:19:15.131 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:19:15 compute-0 nova_compute[254061]: 2026-01-20 19:19:15.146 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:19:15 compute-0 nova_compute[254061]: 2026-01-20 19:19:15.146 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:15 compute-0 nova_compute[254061]: 2026-01-20 19:19:15.146 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:16.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:16 compute-0 ceph-mon[74381]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2123022373' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3762577420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2608742170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:16.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:17 compute-0 nova_compute[254061]: 2026-01-20 19:19:17.170 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:17 compute-0 nova_compute[254061]: 2026-01-20 19:19:17.234 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:17.237Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:19:17 compute-0 ceph-mon[74381]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:17 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/966314244' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:19:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:18.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:18.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:19:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:19:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:20 compute-0 sudo[284356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:19:20 compute-0 sudo[284356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:20 compute-0 sudo[284356]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:20 compute-0 nova_compute[254061]: 2026-01-20 19:19:20.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:20 compute-0 nova_compute[254061]: 2026-01-20 19:19:20.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:19:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:20.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:20.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:20 compute-0 ceph-mon[74381]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:21 compute-0 ceph-mon[74381]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:22 compute-0 nova_compute[254061]: 2026-01-20 19:19:22.175 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:22 compute-0 nova_compute[254061]: 2026-01-20 19:19:22.236 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:22.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:22.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:22 compute-0 ceph-mon[74381]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:24.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:24.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:25 compute-0 ceph-mon[74381]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:19:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:26.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:26 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:27 compute-0 nova_compute[254061]: 2026-01-20 19:19:27.177 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:27 compute-0 ceph-mon[74381]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:27 compute-0 nova_compute[254061]: 2026-01-20 19:19:27.237 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:27.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:19:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:28.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:28.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:29 compute-0 ceph-mon[74381]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:19:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:19:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:19:30.297 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:19:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:19:30.297 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:19:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:19:30.297 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:19:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:30.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:30.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
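[annotation] The ceph-crash error above means the crash-reporter process cannot read /var/lib/ceph/crash, which usually comes down to ownership or mode drift on the host directory relative to the unprivileged ceph user. A quick diagnostic sketch, assuming the conventional uid/gid 167 used for ceph in these container images (adjust if your deployment maps it differently):

```python
import os
import stat

# Why does ceph-crash get EACCES on this path? (path copied from the log)
CRASH_DIR = '/var/lib/ceph/crash'
st = os.stat(CRASH_DIR)
print('owner %d:%d mode %s' % (st.st_uid, st.st_gid, stat.filemode(st.st_mode)))
if st.st_uid != 167:  # assumed ceph uid in Ceph container images
    print('not owned by ceph (167); chown -R 167:167 is the usual remediation')
```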
Jan 20 19:19:31 compute-0 ceph-mon[74381]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:31 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:32 compute-0 nova_compute[254061]: 2026-01-20 19:19:32.179 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:32 compute-0 nova_compute[254061]: 2026-01-20 19:19:32.238 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:32.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:32.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:33 compute-0 ceph-mon[74381]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:34 compute-0 podman[284396]: 2026-01-20 19:19:34.112304528 +0000 UTC m=+0.079317694 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 19:19:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:34.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:34.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:34 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 19:19:34 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 19:19:35 compute-0 ceph-mon[74381]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:36.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:36.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:36 compute-0 ceph-mon[74381]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:36 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:37 compute-0 nova_compute[254061]: 2026-01-20 19:19:37.182 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:37 compute-0 nova_compute[254061]: 2026-01-20 19:19:37.239 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:37.240Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:19:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:37.240Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:19:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:37.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
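[annotation] Alertmanager on compute-0 repeatedly fails to deliver to the ceph-dashboard webhook receivers on compute-1 and compute-2 (i/o timeout, context deadline exceeded), while the local dashboard accepts the same POST a second later (the 200 on /api/prometheus_receiver at 19:19:38 below). A reachability probe sketch, with the URLs taken verbatim from the error messages; the empty alerts payload is only for connectivity testing, not a full Alertmanager notification:

```python
import requests  # assumed available; any HTTP client works

# Probe the receivers Alertmanager is timing out on (URLs from the log).
for host in ('compute-1.ctlplane.example.com', 'compute-2.ctlplane.example.com'):
    url = f'http://{host}:8443/api/prometheus_receiver'
    try:
        r = requests.post(url, json={'alerts': []}, timeout=5)
        print(url, r.status_code)
    except requests.RequestException as exc:
        print(url, 'unreachable:', exc)
```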
Jan 20 19:19:37 compute-0 sudo[284424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:37 compute-0 sudo[284424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:37 compute-0 sudo[284424]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:37 compute-0 sudo[284449]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:19:37 compute-0 sudo[284449]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:38 compute-0 sudo[284449]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:19:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:19:38 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:38 compute-0 sudo[284507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:38 compute-0 sudo[284507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:38 compute-0 sudo[284507]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:38.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:38 compute-0 sudo[284532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:19:38 compute-0 sudo[284532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:38.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.787958243 +0000 UTC m=+0.044497770 container create 2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hermann, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Jan 20 19:19:38 compute-0 systemd[1]: Started libpod-conmon-2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb.scope.
Jan 20 19:19:38 compute-0 ceph-mgr[74676]: [dashboard INFO request] [192.168.122.100:44322] [POST] [200] [0.002s] [4.0B] [c489ba05-86a6-4ce2-ae1d-6aa1cda6e2f2] /api/prometheus_receiver
Jan 20 19:19:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.769861633 +0000 UTC m=+0.026401190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.87196485 +0000 UTC m=+0.128504417 container init 2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.879060678 +0000 UTC m=+0.135600215 container start 2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hermann, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.88178381 +0000 UTC m=+0.138323357 container attach 2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hermann, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:38 compute-0 focused_hermann[284616]: 167 167
Jan 20 19:19:38 compute-0 systemd[1]: libpod-2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb.scope: Deactivated successfully.
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.885670094 +0000 UTC m=+0.142209641 container died 2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hermann, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:19:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-73277528da929e5d2af002ff3378ee5a5fb4862d0d14b54f61ed14946fe55065-merged.mount: Deactivated successfully.
Jan 20 19:19:38 compute-0 podman[284598]: 2026-01-20 19:19:38.928978052 +0000 UTC m=+0.185517589 container remove 2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 19:19:38 compute-0 systemd[1]: libpod-conmon-2cb6d9a03085a81665957d17962d0ebe72384b11b1f4d01e7fb9bb43931c62fb.scope: Deactivated successfully.
Jan 20 19:19:39 compute-0 ceph-mon[74381]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:19:39 compute-0 ceph-mon[74381]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:19:39 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.131877271 +0000 UTC m=+0.057077255 container create 20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hodgkin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:19:39 compute-0 systemd[1]: Started libpod-conmon-20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b.scope.
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.109799845 +0000 UTC m=+0.034999799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:19:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e628fa029eca384f34e4047d68087005ed860c03ba31fe3f88f33e5d8a5fcffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e628fa029eca384f34e4047d68087005ed860c03ba31fe3f88f33e5d8a5fcffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e628fa029eca384f34e4047d68087005ed860c03ba31fe3f88f33e5d8a5fcffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e628fa029eca384f34e4047d68087005ed860c03ba31fe3f88f33e5d8a5fcffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e628fa029eca384f34e4047d68087005ed860c03ba31fe3f88f33e5d8a5fcffc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.263486009 +0000 UTC m=+0.188685973 container init 20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hodgkin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.27442774 +0000 UTC m=+0.199627694 container start 20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.277310246 +0000 UTC m=+0.202510240 container attach 20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hodgkin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:19:39 compute-0 zen_hodgkin[284657]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:19:39 compute-0 zen_hodgkin[284657]: --> All data devices are unavailable
Jan 20 19:19:39 compute-0 systemd[1]: libpod-20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b.scope: Deactivated successfully.
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.621754177 +0000 UTC m=+0.546954141 container died 20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hodgkin, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 19:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-e628fa029eca384f34e4047d68087005ed860c03ba31fe3f88f33e5d8a5fcffc-merged.mount: Deactivated successfully.
Jan 20 19:19:39 compute-0 podman[284640]: 2026-01-20 19:19:39.678310106 +0000 UTC m=+0.603510100 container remove 20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_hodgkin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:19:39 compute-0 systemd[1]: libpod-conmon-20d317c935c2ce3a6678a9a45ecfdf4dbb1a3d86f388838a4ccc2ef82bbb4f5b.scope: Deactivated successfully.
Jan 20 19:19:39 compute-0 sudo[284532]: pam_unix(sudo:session): session closed for user root
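[annotation] The cephadm-driven `ceph-volume lvm batch` run against /dev/ceph_vg0/ceph_lv0 (container zen_hodgkin above) reports "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable": the LV is not usable as a fresh OSD, typically because it already carries OSD data. Consistent with that, cephadm immediately follows up with `ceph-volume lvm list --format json` (19:19:39 below) to inventory existing OSD LVs. A sketch of that inventory step run directly on a host where the LVs are visible (field names are those commonly emitted by ceph-volume; treat as illustrative):

```python
import json
import subprocess

# Reproduce cephadm's follow-up inventory step: which LVs already carry OSDs?
out = subprocess.run(
    ['ceph-volume', 'lvm', 'list', '--format', 'json'],
    capture_output=True, text=True, check=True,
).stdout
for osd_id, devices in json.loads(out).items():
    for dev in devices:
        print(osd_id, dev.get('lv_path'), dev.get('tags', {}).get('ceph.osd_fsid'))
```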
Jan 20 19:19:39 compute-0 sudo[284689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:39 compute-0 sudo[284689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:39 compute-0 sudo[284689]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:19:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:19:39 compute-0 sudo[284714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:19:39 compute-0 sudo[284714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:19:40 compute-0 sudo[284768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:19:40 compute-0 sudo[284768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:40 compute-0 sudo[284768]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.194291314 +0000 UTC m=+0.038147762 container create ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:19:40 compute-0 systemd[1]: Started libpod-conmon-ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9.scope.
Jan 20 19:19:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.255915378 +0000 UTC m=+0.099771856 container init ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.261801484 +0000 UTC m=+0.105657932 container start ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.264958007 +0000 UTC m=+0.108814485 container attach ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Jan 20 19:19:40 compute-0 elegant_cerf[284820]: 167 167
Jan 20 19:19:40 compute-0 systemd[1]: libpod-ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9.scope: Deactivated successfully.
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.267660589 +0000 UTC m=+0.111517047 container died ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.177456208 +0000 UTC m=+0.021312666 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-552f67a6c3551aea66f3ed9b093365c95c0382dff4f2eb4ba06a82b5c8395f9a-merged.mount: Deactivated successfully.
Jan 20 19:19:40 compute-0 podman[284805]: 2026-01-20 19:19:40.301055584 +0000 UTC m=+0.144912042 container remove ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_cerf, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Jan 20 19:19:40 compute-0 systemd[1]: libpod-conmon-ba850d4a7de4bf759d95822a042f2460cf661ddaa9abbfe0fa0be1243f5722b9.scope: Deactivated successfully.
Jan 20 19:19:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:40.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.455398845 +0000 UTC m=+0.040625378 container create f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:19:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:40.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:40 compute-0 systemd[1]: Started libpod-conmon-f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a.scope.
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.436153136 +0000 UTC m=+0.021379659 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:19:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d4e9d985e9d6d2900f5e9aeb6e82f598ae5ac3cb69e85bc27a61c0580d309e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d4e9d985e9d6d2900f5e9aeb6e82f598ae5ac3cb69e85bc27a61c0580d309e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d4e9d985e9d6d2900f5e9aeb6e82f598ae5ac3cb69e85bc27a61c0580d309e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55d4e9d985e9d6d2900f5e9aeb6e82f598ae5ac3cb69e85bc27a61c0580d309e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.552086878 +0000 UTC m=+0.137313431 container init f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_goldwasser, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.560002168 +0000 UTC m=+0.145228671 container start f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_goldwasser, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.576855565 +0000 UTC m=+0.162082158 container attach f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_goldwasser, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]: {
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:     "0": [
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:         {
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "devices": [
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "/dev/loop3"
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             ],
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "lv_name": "ceph_lv0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "lv_size": "21470642176",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "name": "ceph_lv0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "tags": {
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.cluster_name": "ceph",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.crush_device_class": "",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.encrypted": "0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.osd_id": "0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.type": "block",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.vdo": "0",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:                 "ceph.with_tpm": "0"
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             },
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "type": "block",
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:             "vg_name": "ceph_vg0"
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:         }
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]:     ]
Jan 20 19:19:40 compute-0 exciting_goldwasser[284860]: }
Jan 20 19:19:40 compute-0 systemd[1]: libpod-f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a.scope: Deactivated successfully.
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.932315648 +0000 UTC m=+0.517542171 container died f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_goldwasser, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-55d4e9d985e9d6d2900f5e9aeb6e82f598ae5ac3cb69e85bc27a61c0580d309e-merged.mount: Deactivated successfully.
Jan 20 19:19:40 compute-0 podman[284844]: 2026-01-20 19:19:40.992048302 +0000 UTC m=+0.577274815 container remove f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:41 compute-0 systemd[1]: libpod-conmon-f0948cee834f2179fbdbaefb1b43a4539441b1229e8dda8f8a91a44c45bd101a.scope: Deactivated successfully.
Jan 20 19:19:41 compute-0 sudo[284714]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:41 compute-0 sudo[284884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:41 compute-0 sudo[284884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:41 compute-0 sudo[284884]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:41 compute-0 sudo[284909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:19:41 compute-0 sudo[284909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:41 compute-0 ceph-mon[74381]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.545585955 +0000 UTC m=+0.060885795 container create 2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 19:19:41 compute-0 systemd[1]: Started libpod-conmon-2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad.scope.
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.51633292 +0000 UTC m=+0.031632840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:19:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.643214474 +0000 UTC m=+0.158514364 container init 2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.65026563 +0000 UTC m=+0.165565460 container start 2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.653581988 +0000 UTC m=+0.168881838 container attach 2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:19:41 compute-0 compassionate_spence[284991]: 167 167
Jan 20 19:19:41 compute-0 systemd[1]: libpod-2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad.scope: Deactivated successfully.
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.658235791 +0000 UTC m=+0.173535641 container died 2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a1475561a5dda2ad9da92e74903752860a236a1231a4a1c490bcbd9d72c0d12-merged.mount: Deactivated successfully.
Jan 20 19:19:41 compute-0 podman[284974]: 2026-01-20 19:19:41.694228485 +0000 UTC m=+0.209528325 container remove 2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_spence, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 19:19:41 compute-0 systemd[1]: libpod-conmon-2091dfacc8dd362b8f2e537850ff723e4bdce0c4febbab85168627e299f463ad.scope: Deactivated successfully.
Jan 20 19:19:41 compute-0 podman[285017]: 2026-01-20 19:19:41.84227617 +0000 UTC m=+0.042822886 container create abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bouman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:41 compute-0 systemd[1]: Started libpod-conmon-abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55.scope.
Jan 20 19:19:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:41 compute-0 podman[285017]: 2026-01-20 19:19:41.823992456 +0000 UTC m=+0.024539202 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031fcbbc15808cfb06c86aa74b6f03d7cf332f0b5171204a5f5661b2711dfaef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031fcbbc15808cfb06c86aa74b6f03d7cf332f0b5171204a5f5661b2711dfaef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031fcbbc15808cfb06c86aa74b6f03d7cf332f0b5171204a5f5661b2711dfaef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/031fcbbc15808cfb06c86aa74b6f03d7cf332f0b5171204a5f5661b2711dfaef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:41 compute-0 podman[285017]: 2026-01-20 19:19:41.934893505 +0000 UTC m=+0.135440241 container init abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bouman, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:19:41 compute-0 podman[285017]: 2026-01-20 19:19:41.941087699 +0000 UTC m=+0.141634415 container start abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bouman, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:41 compute-0 podman[285017]: 2026-01-20 19:19:41.94411781 +0000 UTC m=+0.144664546 container attach abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:42 compute-0 nova_compute[254061]: 2026-01-20 19:19:42.184 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:42 compute-0 nova_compute[254061]: 2026-01-20 19:19:42.240 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:42.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:19:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:42.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:19:42 compute-0 lvm[285108]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:19:42 compute-0 lvm[285108]: VG ceph_vg0 finished
Jan 20 19:19:42 compute-0 pedantic_bouman[285033]: {}
Jan 20 19:19:42 compute-0 systemd[1]: libpod-abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55.scope: Deactivated successfully.
Jan 20 19:19:42 compute-0 systemd[1]: libpod-abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55.scope: Consumed 1.012s CPU time.
Jan 20 19:19:42 compute-0 podman[285017]: 2026-01-20 19:19:42.629789876 +0000 UTC m=+0.830336582 container died abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bouman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-031fcbbc15808cfb06c86aa74b6f03d7cf332f0b5171204a5f5661b2711dfaef-merged.mount: Deactivated successfully.
Jan 20 19:19:42 compute-0 podman[285017]: 2026-01-20 19:19:42.767335882 +0000 UTC m=+0.967882608 container remove abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_bouman, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:19:42 compute-0 systemd[1]: libpod-conmon-abb934feb0540f868f7ccabf4f4e0b10ba7e6ed3b4272b9dbff828c39a40bb55.scope: Deactivated successfully.
Jan 20 19:19:42 compute-0 sudo[284909]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:19:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:19:43 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:43 compute-0 podman[285128]: 2026-01-20 19:19:43.453739048 +0000 UTC m=+0.089669158 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 19:19:43 compute-0 sudo[285153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:19:43 compute-0 sudo[285153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:43 compute-0 sudo[285153]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:43 compute-0 ceph-mon[74381]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:43 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:19:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:44.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:44.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:44 compute-0 ceph-mon[74381]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 20 19:19:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:46.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:46.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:47 compute-0 nova_compute[254061]: 2026-01-20 19:19:47.185 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:47 compute-0 nova_compute[254061]: 2026-01-20 19:19:47.241 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:47.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:19:47 compute-0 ceph-mon[74381]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 20 19:19:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:48.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:48.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:48.844Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:19:49 compute-0 ceph-mon[74381]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:19:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/594696692' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:19:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/594696692' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:19:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:49] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:19:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:49] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:19:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:50.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:50.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:51 compute-0 ceph-mon[74381]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:52 compute-0 nova_compute[254061]: 2026-01-20 19:19:52.188 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:52 compute-0 nova_compute[254061]: 2026-01-20 19:19:52.242 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:52.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:52.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:52 compute-0 ceph-mon[74381]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:54.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:54.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:19:55
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'volumes', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.nfs', 'images', 'default.rgw.log', 'default.rgw.meta', 'vms']
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:55 compute-0 ceph-mon[74381]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:19:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:19:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:19:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:19:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:56.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:19:57 compute-0 nova_compute[254061]: 2026-01-20 19:19:57.189 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:57.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:19:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:57.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:19:57 compute-0 nova_compute[254061]: 2026-01-20 19:19:57.245 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:19:57 compute-0 ceph-mon[74381]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:19:57 compute-0 sudo[275651]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:57 compute-0 sshd-session[275650]: Received disconnect from 192.168.122.10 port 57520:11: disconnected by user
Jan 20 19:19:57 compute-0 sshd-session[275650]: Disconnected from user zuul 192.168.122.10 port 57520
Jan 20 19:19:57 compute-0 sshd-session[275647]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:19:57 compute-0 systemd[1]: session-57.scope: Deactivated successfully.
Jan 20 19:19:57 compute-0 systemd[1]: session-57.scope: Consumed 2min 55.373s CPU time, 787.1M memory peak, read 273.7M from disk, written 78.2M to disk.
Jan 20 19:19:57 compute-0 systemd-logind[796]: Session 57 logged out. Waiting for processes to exit.
Jan 20 19:19:57 compute-0 systemd-logind[796]: Removed session 57.
Jan 20 19:19:57 compute-0 sshd-session[285192]: Accepted publickey for zuul from 192.168.122.10 port 60302 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 19:19:57 compute-0 systemd-logind[796]: New session 58 of user zuul.
Jan 20 19:19:57 compute-0 systemd[1]: Started Session 58 of User zuul.
Jan 20 19:19:57 compute-0 sshd-session[285192]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:19:58 compute-0 sudo[285196]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-0-2026-01-20-yvoumnt.tar.xz
Jan 20 19:19:58 compute-0 sudo[285196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:58 compute-0 sudo[285196]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:58 compute-0 sshd-session[285195]: Received disconnect from 192.168.122.10 port 60302:11: disconnected by user
Jan 20 19:19:58 compute-0 sshd-session[285195]: Disconnected from user zuul 192.168.122.10 port 60302
Jan 20 19:19:58 compute-0 sshd-session[285192]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:19:58 compute-0 systemd[1]: session-58.scope: Deactivated successfully.
Jan 20 19:19:58 compute-0 systemd-logind[796]: Session 58 logged out. Waiting for processes to exit.
Jan 20 19:19:58 compute-0 systemd-logind[796]: Removed session 58.
Jan 20 19:19:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:58 compute-0 sshd-session[285221]: Accepted publickey for zuul from 192.168.122.10 port 60318 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 19:19:58 compute-0 systemd-logind[796]: New session 59 of user zuul.
Jan 20 19:19:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:19:58.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:58 compute-0 systemd[1]: Started Session 59 of User zuul.
Jan 20 19:19:58 compute-0 sshd-session[285221]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:19:58 compute-0 sudo[285226]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Jan 20 19:19:58 compute-0 sudo[285226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:58 compute-0 sudo[285226]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:58 compute-0 sshd-session[285225]: Received disconnect from 192.168.122.10 port 60318:11: disconnected by user
Jan 20 19:19:58 compute-0 sshd-session[285225]: Disconnected from user zuul 192.168.122.10 port 60318
Jan 20 19:19:58 compute-0 sshd-session[285221]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:19:58 compute-0 systemd[1]: session-59.scope: Deactivated successfully.
Jan 20 19:19:58 compute-0 systemd-logind[796]: Session 59 logged out. Waiting for processes to exit.
Jan 20 19:19:58 compute-0 systemd-logind[796]: Removed session 59.
Jan 20 19:19:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:19:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:19:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:19:58.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:19:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:58.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:19:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:19:58.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:19:59 compute-0 ceph-mon[74381]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:19:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:19:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:19:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Jan 20 19:20:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 20 19:20:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.2.0.compute-0.ulclbx on compute-0 is in error state
Jan 20 19:20:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:00 compute-0 sudo[285252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:20:00 compute-0 sudo[285252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:00 compute-0 sudo[285252]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:00.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:00.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:00 compute-0 ceph-mon[74381]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Jan 20 19:20:00 compute-0 ceph-mon[74381]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Jan 20 19:20:00 compute-0 ceph-mon[74381]:     daemon nfs.cephfs.2.0.compute-0.ulclbx on compute-0 is in error state
Jan 20 19:20:01 compute-0 ceph-mon[74381]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:01 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:02 compute-0 nova_compute[254061]: 2026-01-20 19:20:02.191 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:02 compute-0 nova_compute[254061]: 2026-01-20 19:20:02.246 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:02.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:02.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:02 compute-0 ceph-mon[74381]: pgmap v1144: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:04.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:04.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:05 compute-0 podman[285283]: 2026-01-20 19:20:05.08234715 +0000 UTC m=+0.056850788 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 19:20:05 compute-0 ceph-mon[74381]: pgmap v1145: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:06.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:06 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:07 compute-0 nova_compute[254061]: 2026-01-20 19:20:07.194 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:07.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:07 compute-0 nova_compute[254061]: 2026-01-20 19:20:07.247 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:07 compute-0 ceph-mon[74381]: pgmap v1146: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:08.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:08.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:08.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.152 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.153 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.153 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.153 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.154 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:20:09 compute-0 ceph-mon[74381]: pgmap v1147: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:20:09 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2881210917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.652 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.803 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.804 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4487MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.804 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.805 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:20:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.894 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.894 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:20:09 compute-0 nova_compute[254061]: 2026-01-20 19:20:09.924 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:20:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:20:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1837829676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:10.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:10 compute-0 nova_compute[254061]: 2026-01-20 19:20:10.392 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:20:10 compute-0 nova_compute[254061]: 2026-01-20 19:20:10.399 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:20:10 compute-0 nova_compute[254061]: 2026-01-20 19:20:10.461 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:20:10 compute-0 nova_compute[254061]: 2026-01-20 19:20:10.462 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:20:10 compute-0 nova_compute[254061]: 2026-01-20 19:20:10.462 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:20:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:10.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2881210917' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:20:10 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1837829676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:11 compute-0 ceph-mon[74381]: pgmap v1148: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:12 compute-0 nova_compute[254061]: 2026-01-20 19:20:12.196 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:12 compute-0 nova_compute[254061]: 2026-01-20 19:20:12.248 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:12.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:12.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:12 compute-0 ceph-mon[74381]: pgmap v1149: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:13 compute-0 nova_compute[254061]: 2026-01-20 19:20:13.463 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:14 compute-0 podman[285354]: 2026-01-20 19:20:14.112495699 +0000 UTC m=+0.078730009 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 19:20:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:14.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:14.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.513 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.513 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.514 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:15 compute-0 nova_compute[254061]: 2026-01-20 19:20:15.514 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:15 compute-0 ceph-mon[74381]: pgmap v1150: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:16 compute-0 nova_compute[254061]: 2026-01-20 19:20:16.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:16 compute-0 nova_compute[254061]: 2026-01-20 19:20:16.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:20:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:16.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:16.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:16 compute-0 ceph-mon[74381]: pgmap v1151: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/363008408' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:17 compute-0 nova_compute[254061]: 2026-01-20 19:20:17.195 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:17.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:20:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:17.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:20:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:17.246Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:20:17 compute-0 nova_compute[254061]: 2026-01-20 19:20:17.250 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:18.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/201958825' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3489208876' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4042988181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:20:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:18.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:19 compute-0 ceph-mon[74381]: pgmap v1152: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:20:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:19] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:20:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:20 compute-0 sudo[285388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:20:20 compute-0 sudo[285388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:20 compute-0 sudo[285388]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:20.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:20 compute-0 ceph-mon[74381]: pgmap v1153: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:21 compute-0 nova_compute[254061]: 2026-01-20 19:20:21.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:22 compute-0 nova_compute[254061]: 2026-01-20 19:20:22.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:22 compute-0 nova_compute[254061]: 2026-01-20 19:20:22.197 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:22 compute-0 nova_compute[254061]: 2026-01-20 19:20:22.250 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:22.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:23 compute-0 ceph-mon[74381]: pgmap v1154: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:20:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:24.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:20:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:24.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:25 compute-0 ceph-mon[74381]: pgmap v1155: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:20:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:26.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:26.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:27 compute-0 nova_compute[254061]: 2026-01-20 19:20:27.199 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:27.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:27 compute-0 nova_compute[254061]: 2026-01-20 19:20:27.252 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:27 compute-0 ceph-mon[74381]: pgmap v1156: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:28 compute-0 nova_compute[254061]: 2026-01-20 19:20:28.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:28.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:28.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:29 compute-0 ceph-mon[74381]: pgmap v1157: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:20:30.297 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:20:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:20:30.298 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:20:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:20:30.298 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:20:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:30.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:31 compute-0 ceph-mon[74381]: pgmap v1158: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:32 compute-0 nova_compute[254061]: 2026-01-20 19:20:32.201 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:32 compute-0 nova_compute[254061]: 2026-01-20 19:20:32.253 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:32.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:32.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:20:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3584 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2134 writes, 6845 keys, 2134 commit groups, 1.0 writes per commit group, ingest: 7.09 MB, 0.01 MB/s
                                           Interval WAL: 2134 writes, 912 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
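
The "writes per sync" figures in the RocksDB dump above are plain ratios of the counters printed on the same line; a minimal check in Python, with the interval values copied from the stats block (variable names are illustrative):

    # Reproduce RocksDB's "writes per sync" from the interval WAL counters above.
    interval_wal_writes = 2134  # "Interval WAL: 2134 writes, ..."
    interval_wal_syncs = 912    # "..., 912 syncs, ..."
    print(f"{interval_wal_writes / interval_wal_syncs:.2f} writes per sync")  # 2.34
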
Jan 20 19:20:33 compute-0 ceph-mon[74381]: pgmap v1159: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:35 compute-0 ceph-mon[74381]: pgmap v1160: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:36 compute-0 podman[285429]: 2026-01-20 19:20:36.112733322 +0000 UTC m=+0.077761802 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:20:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:36.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:36.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:37 compute-0 nova_compute[254061]: 2026-01-20 19:20:37.204 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:37.249Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:20:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:37.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:37 compute-0 nova_compute[254061]: 2026-01-20 19:20:37.255 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:37 compute-0 ceph-mon[74381]: pgmap v1161: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:38.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:38.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:39 compute-0 ceph-mon[74381]: pgmap v1162: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:20:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:40.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:40 compute-0 sudo[285453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:20:40 compute-0 sudo[285453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:40 compute-0 sudo[285453]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:40.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:20:41 compute-0 ceph-mon[74381]: pgmap v1163: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:42 compute-0 nova_compute[254061]: 2026-01-20 19:20:42.207 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:42 compute-0 nova_compute[254061]: 2026-01-20 19:20:42.255 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:42.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:42.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:43 compute-0 ceph-mon[74381]: pgmap v1164: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:43 compute-0 sudo[285481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:43 compute-0 sudo[285481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:43 compute-0 sudo[285481]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:43 compute-0 sudo[285506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 20 19:20:43 compute-0 sudo[285506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:44 compute-0 sudo[285506]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:20:44 compute-0 podman[285545]: 2026-01-20 19:20:44.29067991 +0000 UTC m=+0.084152661 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 20 19:20:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:20:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:44.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:20:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:44.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:20:44 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:44 compute-0 sudo[285579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:44 compute-0 sudo[285579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:44 compute-0 sudo[285579]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:44 compute-0 sudo[285604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:20:44 compute-0 sudo[285604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:45 compute-0 sudo[285604]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:45 compute-0 ceph-mon[74381]: pgmap v1165: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:45 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:20:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:20:45 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:45 compute-0 sudo[285662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:45 compute-0 sudo[285662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:45 compute-0 sudo[285662]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:45 compute-0 sudo[285687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:20:45 compute-0 sudo[285687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:45 compute-0 podman[285751]: 2026-01-20 19:20:45.923909596 +0000 UTC m=+0.039360084 container create df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:20:45 compute-0 systemd[1]: Started libpod-conmon-df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f.scope.
Jan 20 19:20:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:46 compute-0 podman[285751]: 2026-01-20 19:20:45.907924322 +0000 UTC m=+0.023374830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:20:46 compute-0 podman[285751]: 2026-01-20 19:20:46.01383431 +0000 UTC m=+0.129284818 container init df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 19:20:46 compute-0 podman[285751]: 2026-01-20 19:20:46.021673438 +0000 UTC m=+0.137123926 container start df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:20:46 compute-0 podman[285751]: 2026-01-20 19:20:46.024660697 +0000 UTC m=+0.140111205 container attach df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mayer, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:20:46 compute-0 competent_mayer[285767]: 167 167
Jan 20 19:20:46 compute-0 systemd[1]: libpod-df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f.scope: Deactivated successfully.
Jan 20 19:20:46 compute-0 conmon[285767]: conmon df49e0133211e17f2b80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f.scope/container/memory.events
Jan 20 19:20:46 compute-0 podman[285751]: 2026-01-20 19:20:46.031175299 +0000 UTC m=+0.146625827 container died df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mayer, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbfe3be8523d8c68dcfc93313c0e1b021f11e9566524ea1c29622a02ba4a40f6-merged.mount: Deactivated successfully.
Jan 20 19:20:46 compute-0 podman[285751]: 2026-01-20 19:20:46.089061284 +0000 UTC m=+0.204511772 container remove df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mayer, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:46 compute-0 systemd[1]: libpod-conmon-df49e0133211e17f2b80ceb141bff4925c39233dc633b3d3c79a974bd003665f.scope: Deactivated successfully.
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.273851263 +0000 UTC m=+0.044085210 container create 2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_sammet, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:20:46 compute-0 systemd[1]: Started libpod-conmon-2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f.scope.
Jan 20 19:20:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21294f2b6a457c76a20e1bdb82af789ff4ba58eab55650bee7b718127cafb09b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21294f2b6a457c76a20e1bdb82af789ff4ba58eab55650bee7b718127cafb09b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21294f2b6a457c76a20e1bdb82af789ff4ba58eab55650bee7b718127cafb09b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21294f2b6a457c76a20e1bdb82af789ff4ba58eab55650bee7b718127cafb09b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21294f2b6a457c76a20e1bdb82af789ff4ba58eab55650bee7b718127cafb09b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.255046454 +0000 UTC m=+0.025280411 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.351317426 +0000 UTC m=+0.121551403 container init 2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_sammet, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.358893177 +0000 UTC m=+0.129127114 container start 2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_sammet, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.361851275 +0000 UTC m=+0.132085262 container attach 2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:20:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:20:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:46.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:20:46 compute-0 ceph-mon[74381]: pgmap v1166: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:20:46 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:20:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:46.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:46 compute-0 epic_sammet[285809]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:20:46 compute-0 epic_sammet[285809]: --> All data devices are unavailable
Jan 20 19:20:46 compute-0 systemd[1]: libpod-2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f.scope: Deactivated successfully.
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.681374476 +0000 UTC m=+0.451608413 container died 2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 19:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-21294f2b6a457c76a20e1bdb82af789ff4ba58eab55650bee7b718127cafb09b-merged.mount: Deactivated successfully.
Jan 20 19:20:46 compute-0 podman[285793]: 2026-01-20 19:20:46.725978578 +0000 UTC m=+0.496212525 container remove 2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 19:20:46 compute-0 systemd[1]: libpod-conmon-2c9a36b865ffbedba39067ac1bd6a285f8811b7324aacf97308515cf7e1d9f9f.scope: Deactivated successfully.
Jan 20 19:20:46 compute-0 sudo[285687]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:46 compute-0 sudo[285837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:46 compute-0 sudo[285837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:46 compute-0 sudo[285837]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:46 compute-0 sudo[285863]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:20:46 compute-0 sudo[285863]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:47 compute-0 nova_compute[254061]: 2026-01-20 19:20:47.209 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:47.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:47 compute-0 nova_compute[254061]: 2026-01-20 19:20:47.257 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.278383812 +0000 UTC m=+0.039216471 container create f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:47 compute-0 systemd[1]: Started libpod-conmon-f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732.scope.
Jan 20 19:20:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.263149278 +0000 UTC m=+0.023981957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.367569786 +0000 UTC m=+0.128402495 container init f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.382232194 +0000 UTC m=+0.143064853 container start f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.386795116 +0000 UTC m=+0.147627775 container attach f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:20:47 compute-0 wonderful_brattain[285944]: 167 167
Jan 20 19:20:47 compute-0 systemd[1]: libpod-f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732.scope: Deactivated successfully.
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.388428289 +0000 UTC m=+0.149260948 container died f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 19:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-95e30c077d2cbfc996c4478890f3031dca17a0ebfc2013da0fd3babd568f0928-merged.mount: Deactivated successfully.
Jan 20 19:20:47 compute-0 podman[285928]: 2026-01-20 19:20:47.426481138 +0000 UTC m=+0.187313807 container remove f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_brattain, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 19:20:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:47 compute-0 systemd[1]: libpod-conmon-f2a059e37e4b3aac8d926949319210e66bb612b8564fb76adc1fb0b2cf24f732.scope: Deactivated successfully.
Jan 20 19:20:47 compute-0 podman[285969]: 2026-01-20 19:20:47.62423403 +0000 UTC m=+0.061135932 container create e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:20:47 compute-0 systemd[1]: Started libpod-conmon-e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365.scope.
Jan 20 19:20:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef11d2fa383b317fd7d6ef4a0682ee9356f03ea128e560c98452d62eaf28320d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef11d2fa383b317fd7d6ef4a0682ee9356f03ea128e560c98452d62eaf28320d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef11d2fa383b317fd7d6ef4a0682ee9356f03ea128e560c98452d62eaf28320d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef11d2fa383b317fd7d6ef4a0682ee9356f03ea128e560c98452d62eaf28320d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:47 compute-0 podman[285969]: 2026-01-20 19:20:47.696960008 +0000 UTC m=+0.133861990 container init e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hugle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 19:20:47 compute-0 podman[285969]: 2026-01-20 19:20:47.60576931 +0000 UTC m=+0.042671232 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:20:47 compute-0 podman[285969]: 2026-01-20 19:20:47.707002484 +0000 UTC m=+0.143904406 container start e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:47 compute-0 podman[285969]: 2026-01-20 19:20:47.711425251 +0000 UTC m=+0.148327193 container attach e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hugle, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:20:47 compute-0 recursing_hugle[285985]: {
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:     "0": [
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:         {
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "devices": [
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "/dev/loop3"
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             ],
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "lv_name": "ceph_lv0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "lv_size": "21470642176",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "name": "ceph_lv0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "tags": {
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.cluster_name": "ceph",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.crush_device_class": "",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.encrypted": "0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.osd_id": "0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.type": "block",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.vdo": "0",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:                 "ceph.with_tpm": "0"
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             },
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "type": "block",
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:             "vg_name": "ceph_vg0"
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:         }
Jan 20 19:20:47 compute-0 recursing_hugle[285985]:     ]
Jan 20 19:20:47 compute-0 recursing_hugle[285985]: }
Jan 20 19:20:47 compute-0 systemd[1]: libpod-e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365.scope: Deactivated successfully.
Jan 20 19:20:47 compute-0 podman[285969]: 2026-01-20 19:20:47.985433755 +0000 UTC m=+0.422335657 container died e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hugle, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-ef11d2fa383b317fd7d6ef4a0682ee9356f03ea128e560c98452d62eaf28320d-merged.mount: Deactivated successfully.
Jan 20 19:20:48 compute-0 podman[285969]: 2026-01-20 19:20:48.026860263 +0000 UTC m=+0.463762165 container remove e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=recursing_hugle, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:20:48 compute-0 systemd[1]: libpod-conmon-e18f01863da587bc393b6edcf79c2357fb62bf5883a84a750025c657df735365.scope: Deactivated successfully.
Jan 20 19:20:48 compute-0 sudo[285863]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:48 compute-0 sudo[286008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:48 compute-0 sudo[286008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:48 compute-0 sudo[286008]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:48 compute-0 sudo[286033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:20:48 compute-0 sudo[286033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:48.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:48 compute-0 ceph-mon[74381]: pgmap v1167: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2753353544' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:20:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2753353544' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.580232252 +0000 UTC m=+0.045018974 container create 6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:48.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:48 compute-0 systemd[1]: Started libpod-conmon-6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f.scope.
Jan 20 19:20:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.557591462 +0000 UTC m=+0.022378214 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.659975676 +0000 UTC m=+0.124762478 container init 6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.666741566 +0000 UTC m=+0.131528268 container start 6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.67033542 +0000 UTC m=+0.135122182 container attach 6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:20:48 compute-0 serene_wright[286116]: 167 167
Jan 20 19:20:48 compute-0 systemd[1]: libpod-6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f.scope: Deactivated successfully.
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.672532889 +0000 UTC m=+0.137319601 container died 6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-56d8b28dd158bd78dac824565c851a3a83533c56def49ad9aad34986617dd838-merged.mount: Deactivated successfully.
Jan 20 19:20:48 compute-0 podman[286099]: 2026-01-20 19:20:48.707017873 +0000 UTC m=+0.171804575 container remove 6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_wright, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Jan 20 19:20:48 compute-0 systemd[1]: libpod-conmon-6a0f0e7bc60e9a0e5decf1531216979a73b825a41565e0cf59efbf7b3be1926f.scope: Deactivated successfully.
Jan 20 19:20:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:48.850Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:20:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:48.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:20:48 compute-0 podman[286141]: 2026-01-20 19:20:48.886272184 +0000 UTC m=+0.044041378 container create 2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:20:48 compute-0 systemd[1]: Started libpod-conmon-2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24.scope.
Jan 20 19:20:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:48 compute-0 podman[286141]: 2026-01-20 19:20:48.869583273 +0000 UTC m=+0.027352487 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70f5877ad0fcc8a19b70bcbacaed379a184251d03299c7e6052162cb9e267f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70f5877ad0fcc8a19b70bcbacaed379a184251d03299c7e6052162cb9e267f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70f5877ad0fcc8a19b70bcbacaed379a184251d03299c7e6052162cb9e267f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70f5877ad0fcc8a19b70bcbacaed379a184251d03299c7e6052162cb9e267f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:48 compute-0 podman[286141]: 2026-01-20 19:20:48.993911948 +0000 UTC m=+0.151681162 container init 2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Jan 20 19:20:49 compute-0 podman[286141]: 2026-01-20 19:20:49.003751309 +0000 UTC m=+0.161520503 container start 2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:20:49 compute-0 podman[286141]: 2026-01-20 19:20:49.007085258 +0000 UTC m=+0.164854452 container attach 2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:20:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:49 compute-0 lvm[286231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:20:49 compute-0 lvm[286231]: VG ceph_vg0 finished
Jan 20 19:20:49 compute-0 busy_rhodes[286157]: {}
Jan 20 19:20:49 compute-0 systemd[1]: libpod-2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24.scope: Deactivated successfully.
Jan 20 19:20:49 compute-0 systemd[1]: libpod-2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24.scope: Consumed 1.130s CPU time.
Jan 20 19:20:49 compute-0 podman[286235]: 2026-01-20 19:20:49.738064485 +0000 UTC m=+0.024774368 container died 2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70f5877ad0fcc8a19b70bcbacaed379a184251d03299c7e6052162cb9e267f3-merged.mount: Deactivated successfully.
Jan 20 19:20:49 compute-0 podman[286235]: 2026-01-20 19:20:49.786055977 +0000 UTC m=+0.072765870 container remove 2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:20:49 compute-0 systemd[1]: libpod-conmon-2e62209db0b7dd70aaaf46d69165f24278f5524e086092c177bc4f2fdf368b24.scope: Deactivated successfully.
Jan 20 19:20:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:20:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:20:49 compute-0 sudo[286033]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:20:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:20:49 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:49 compute-0 sudo[286250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:20:49 compute-0 sudo[286250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:49 compute-0 sudo[286250]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:50.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:50.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:50 compute-0 ceph-mon[74381]: pgmap v1168: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:50 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:50 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:20:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:20:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:52 compute-0 nova_compute[254061]: 2026-01-20 19:20:52.212 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:52 compute-0 nova_compute[254061]: 2026-01-20 19:20:52.258 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:52.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:52.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:52 compute-0 ceph-mon[74381]: pgmap v1169: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:20:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:54.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:54.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:54 compute-0 ceph-mon[74381]: pgmap v1170: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:20:55
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['default.rgw.log', '.nfs', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.rgw.root', 'backups']
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:20:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:20:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:20:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:56.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:20:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:56.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:20:56 compute-0 ceph-mon[74381]: pgmap v1171: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:20:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:20:57 compute-0 nova_compute[254061]: 2026-01-20 19:20:57.215 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:57.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:20:57 compute-0 nova_compute[254061]: 2026-01-20 19:20:57.259 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:20:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:57 compute-0 ceph-mon[74381]: pgmap v1172: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:20:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:20:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:20:58.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:20:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:20:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:20:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:20:58.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:20:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:58.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:20:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:58.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:20:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:20:58.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:20:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:20:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:59] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:20:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:20:59] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:21:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:00.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:00 compute-0 sudo[286286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:21:00 compute-0 sudo[286286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:00 compute-0 sudo[286286]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:00 compute-0 ceph-mon[74381]: pgmap v1173: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:00.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:02 compute-0 nova_compute[254061]: 2026-01-20 19:21:02.217 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:02 compute-0 nova_compute[254061]: 2026-01-20 19:21:02.261 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:02.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:02 compute-0 ceph-mon[74381]: pgmap v1174: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:02.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:04.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:04.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:04 compute-0 ceph-mon[74381]: pgmap v1175: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:06.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:06 compute-0 ceph-mon[74381]: pgmap v1176: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:07 compute-0 podman[286318]: 2026-01-20 19:21:07.126461543 +0000 UTC m=+0.100007893 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 20 19:21:07 compute-0 nova_compute[254061]: 2026-01-20 19:21:07.218 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:07.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:07 compute-0 nova_compute[254061]: 2026-01-20 19:21:07.263 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:08.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:08.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:08.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:08 compute-0 ceph-mon[74381]: pgmap v1177: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:09] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:21:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:09] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:10 compute-0 ceph-mon[74381]: pgmap v1178: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.184 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.185 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.185 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.185 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.185 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:21:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:10.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:10.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:10 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:21:10 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696292566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.698 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.935 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.936 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4469MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.936 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:21:10 compute-0 nova_compute[254061]: 2026-01-20 19:21:10.936 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.041 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.041 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.088 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:21:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:21:11 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2696292566' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:11 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:21:11 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4192683848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.650 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.655 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.682 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.683 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:21:11 compute-0 nova_compute[254061]: 2026-01-20 19:21:11.683 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:21:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:12 compute-0 ceph-mon[74381]: pgmap v1179: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:12 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4192683848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:12 compute-0 nova_compute[254061]: 2026-01-20 19:21:12.220 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:12 compute-0 nova_compute[254061]: 2026-01-20 19:21:12.263 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000052s ======
Jan 20 19:21:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:12.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000052s
Jan 20 19:21:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:12.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:14.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:14 compute-0 ceph-mon[74381]: pgmap v1180: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:14 compute-0 nova_compute[254061]: 2026-01-20 19:21:14.685 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:15 compute-0 nova_compute[254061]: 2026-01-20 19:21:15.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:15 compute-0 podman[286389]: 2026-01-20 19:21:15.149031291 +0000 UTC m=+0.120074634 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:21:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:16 compute-0 nova_compute[254061]: 2026-01-20 19:21:16.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:16 compute-0 nova_compute[254061]: 2026-01-20 19:21:16.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:21:16 compute-0 nova_compute[254061]: 2026-01-20 19:21:16.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:21:16 compute-0 nova_compute[254061]: 2026-01-20 19:21:16.168 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:21:16 compute-0 nova_compute[254061]: 2026-01-20 19:21:16.168 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:16.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:16.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:16 compute-0 ceph-mon[74381]: pgmap v1181: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3778909833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:17 compute-0 nova_compute[254061]: 2026-01-20 19:21:17.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:17 compute-0 nova_compute[254061]: 2026-01-20 19:21:17.221 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:17.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:17 compute-0 nova_compute[254061]: 2026-01-20 19:21:17.264 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:18 compute-0 nova_compute[254061]: 2026-01-20 19:21:18.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:18 compute-0 nova_compute[254061]: 2026-01-20 19:21:18.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:21:18 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2852820539' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:18 compute-0 ceph-mon[74381]: pgmap v1182: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:21:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:18.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:21:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:18.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:18.854Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2062829337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:20 compute-0 ceph-mon[74381]: pgmap v1183: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3313215674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:21:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:20.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:20.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:20 compute-0 sudo[286423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:21:20 compute-0 sudo[286423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:20 compute-0 sudo[286423]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:21 compute-0 nova_compute[254061]: 2026-01-20 19:21:21.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:22 compute-0 nova_compute[254061]: 2026-01-20 19:21:22.224 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:22 compute-0 nova_compute[254061]: 2026-01-20 19:21:22.265 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:22.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:22 compute-0 ceph-mon[74381]: pgmap v1184: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:22.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:24 compute-0 nova_compute[254061]: 2026-01-20 19:21:24.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:24.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:24 compute-0 ceph-mon[74381]: pgmap v1185: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:24.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:21:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:26.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:26.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:26 compute-0 ceph-mon[74381]: pgmap v1186: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:27 compute-0 nova_compute[254061]: 2026-01-20 19:21:27.225 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:27.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:27 compute-0 nova_compute[254061]: 2026-01-20 19:21:27.267 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:27 compute-0 ceph-mon[74381]: pgmap v1187: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:28.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:28.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:28.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:21:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:28.854Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:21:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:28.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:21:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:21:30.299 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:21:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:21:30.299 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:21:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:21:30.299 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:21:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:30.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:30.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:30 compute-0 ceph-mon[74381]: pgmap v1188: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:31 compute-0 ceph-mon[74381]: pgmap v1189: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:32 compute-0 nova_compute[254061]: 2026-01-20 19:21:32.226 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:32 compute-0 nova_compute[254061]: 2026-01-20 19:21:32.268 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:32.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:32.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:34.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:34 compute-0 ceph-mon[74381]: pgmap v1190: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:34.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:36.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:36 compute-0 ceph-mon[74381]: pgmap v1191: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:36.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:37 compute-0 nova_compute[254061]: 2026-01-20 19:21:37.230 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:37.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:37 compute-0 nova_compute[254061]: 2026-01-20 19:21:37.270 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:38 compute-0 podman[286465]: 2026-01-20 19:21:38.07840865 +0000 UTC m=+0.060334610 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:21:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:38.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:38 compute-0 ceph-mon[74381]: pgmap v1192: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000025s ======
Jan 20 19:21:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:38.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Jan 20 19:21:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:38.857Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:40.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:40 compute-0 ceph-mon[74381]: pgmap v1193: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:21:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:40.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:40 compute-0 sudo[286487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:21:40 compute-0 sudo[286487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:40 compute-0 sudo[286487]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:42 compute-0 nova_compute[254061]: 2026-01-20 19:21:42.232 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:42 compute-0 nova_compute[254061]: 2026-01-20 19:21:42.273 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:42.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:42.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:42 compute-0 ceph-mon[74381]: pgmap v1194: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:43 compute-0 ceph-mon[74381]: pgmap v1195: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:44.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:44.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:46 compute-0 podman[286517]: 2026-01-20 19:21:46.102310397 +0000 UTC m=+0.080826814 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:21:46 compute-0 ceph-mon[74381]: pgmap v1196: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:46.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:46.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:47 compute-0 nova_compute[254061]: 2026-01-20 19:21:47.232 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:47.255Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:47 compute-0 nova_compute[254061]: 2026-01-20 19:21:47.275 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:48.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:48 compute-0 ceph-mon[74381]: pgmap v1197: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3378198154' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:21:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3378198154' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:21:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:48.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:48.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:21:50 compute-0 sudo[286548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:21:50 compute-0 sudo[286548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:50 compute-0 sudo[286548]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:50 compute-0 sudo[286573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:21:50 compute-0 sudo[286573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:50.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:50.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:21:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:21:50 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:50 compute-0 sudo[286573]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:50 compute-0 ceph-mon[74381]: pgmap v1198: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:21:50 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:50 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:21:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:21:51 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:51 compute-0 sudo[286631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:21:51 compute-0 sudo[286631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:51 compute-0 sudo[286631]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:51 compute-0 sudo[286656]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:21:51 compute-0 sudo[286656]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:52 compute-0 ceph-mon[74381]: pgmap v1199: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:21:52 compute-0 ceph-mon[74381]: pgmap v1200: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:21:52 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:21:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.039122) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936912039171, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1941, "num_deletes": 507, "total_data_size": 3275674, "memory_usage": 3327872, "flush_reason": "Manual Compaction"}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936912058632, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 3194075, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 32454, "largest_seqno": 34394, "table_properties": {"data_size": 3185536, "index_size": 4777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 19756, "raw_average_key_size": 18, "raw_value_size": 3166443, "raw_average_value_size": 2962, "num_data_blocks": 206, "num_entries": 1069, "num_filter_entries": 1069, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936750, "oldest_key_time": 1768936750, "file_creation_time": 1768936912, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 19553 microseconds, and 9555 cpu microseconds.
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.058675) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 3194075 bytes OK
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.058694) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.060326) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.060340) EVENT_LOG_v1 {"time_micros": 1768936912060336, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.060358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 3266476, prev total WAL file size 3266476, number of live WAL files 2.
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.061654) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323538' seq:72057594037927935, type:22 .. '6B7600353131' seq:0, type:0; will stop at (end)
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(3119KB)], [68(14MB)]
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936912062679, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 18467649, "oldest_snapshot_seqno": -1}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6914 keys, 16944660 bytes, temperature: kUnknown
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936912191531, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 16944660, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16896341, "index_size": 29903, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180578, "raw_average_key_size": 26, "raw_value_size": 16769770, "raw_average_value_size": 2425, "num_data_blocks": 1187, "num_entries": 6914, "num_filter_entries": 6914, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936912, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.191776) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 16944660 bytes
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.193441) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 143.2 rd, 131.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 14.6 +0.0 blob) out(16.2 +0.0 blob), read-write-amplify(11.1) write-amplify(5.3) OK, records in: 7947, records dropped: 1033 output_compression: NoCompression
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.193456) EVENT_LOG_v1 {"time_micros": 1768936912193449, "job": 38, "event": "compaction_finished", "compaction_time_micros": 128925, "compaction_time_cpu_micros": 33030, "output_level": 6, "num_output_files": 1, "total_output_size": 16944660, "num_input_records": 7947, "num_output_records": 6914, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936912194107, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.194876503 +0000 UTC m=+0.067962552 container create 9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_archimedes, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936912196587, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.061490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.196652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.196659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.196661) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.196663) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:52 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:21:52.196664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:52 compute-0 systemd[1]: Started libpod-conmon-9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd.scope.
Jan 20 19:21:52 compute-0 nova_compute[254061]: 2026-01-20 19:21:52.234 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.147064566 +0000 UTC m=+0.020150635 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.265730822 +0000 UTC m=+0.138816881 container init 9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.27473409 +0000 UTC m=+0.147820129 container start 9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:21:52 compute-0 nova_compute[254061]: 2026-01-20 19:21:52.276 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:52 compute-0 awesome_archimedes[286739]: 167 167
Jan 20 19:21:52 compute-0 systemd[1]: libpod-9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd.scope: Deactivated successfully.
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.279121057 +0000 UTC m=+0.152207106 container attach 9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:21:52 compute-0 conmon[286739]: conmon 9ff2984759b90ec3cfe0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd.scope/container/memory.events
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.279939569 +0000 UTC m=+0.153025618 container died 9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-76a7e4d166171ed6b0edf2a542fb005e2da51cb89182a16faf84c7eaa6c6d8ca-merged.mount: Deactivated successfully.
Jan 20 19:21:52 compute-0 podman[286723]: 2026-01-20 19:21:52.318234323 +0000 UTC m=+0.191320372 container remove 9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_archimedes, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:21:52 compute-0 systemd[1]: libpod-conmon-9ff2984759b90ec3cfe0b3c5708143b9052c7d36faff4cb7928945eaf081f8cd.scope: Deactivated successfully.
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.465601931 +0000 UTC m=+0.040230878 container create 1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_feistel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:21:52 compute-0 systemd[1]: Started libpod-conmon-1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5.scope.
Jan 20 19:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99dfcc8b5f667e85fb058529d2d03ac72c81240a68d80969350fd851512f91e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99dfcc8b5f667e85fb058529d2d03ac72c81240a68d80969350fd851512f91e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99dfcc8b5f667e85fb058529d2d03ac72c81240a68d80969350fd851512f91e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99dfcc8b5f667e85fb058529d2d03ac72c81240a68d80969350fd851512f91e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99dfcc8b5f667e85fb058529d2d03ac72c81240a68d80969350fd851512f91e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.544716898 +0000 UTC m=+0.119345875 container init 1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.447356047 +0000 UTC m=+0.021985024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:21:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:52.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.558559405 +0000 UTC m=+0.133188352 container start 1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.562310034 +0000 UTC m=+0.136938991 container attach 1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_feistel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:21:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:52.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:52 compute-0 vibrant_feistel[286780]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:21:52 compute-0 vibrant_feistel[286780]: --> All data devices are unavailable
Jan 20 19:21:52 compute-0 systemd[1]: libpod-1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5.scope: Deactivated successfully.
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.920789687 +0000 UTC m=+0.495418654 container died 1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_feistel, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-99dfcc8b5f667e85fb058529d2d03ac72c81240a68d80969350fd851512f91e6-merged.mount: Deactivated successfully.
Jan 20 19:21:52 compute-0 podman[286763]: 2026-01-20 19:21:52.971968563 +0000 UTC m=+0.546597510 container remove 1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_feistel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:21:52 compute-0 systemd[1]: libpod-conmon-1a446e497cab7fade5367672d60bb03d527ccc90e0bf4436391ea195aa2673f5.scope: Deactivated successfully.
Jan 20 19:21:53 compute-0 sudo[286656]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:53 compute-0 sudo[286807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:21:53 compute-0 sudo[286807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:53 compute-0 sudo[286807]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:53 compute-0 sudo[286832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:21:53 compute-0 sudo[286832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.516764425 +0000 UTC m=+0.034366391 container create ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:21:53 compute-0 systemd[1]: Started libpod-conmon-ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff.scope.
Jan 20 19:21:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.501444159 +0000 UTC m=+0.019046145 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.603099924 +0000 UTC m=+0.120701910 container init ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.611424475 +0000 UTC m=+0.129026441 container start ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.614798894 +0000 UTC m=+0.132400890 container attach ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 19:21:53 compute-0 busy_blackburn[286914]: 167 167
Jan 20 19:21:53 compute-0 systemd[1]: libpod-ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff.scope: Deactivated successfully.
Jan 20 19:21:53 compute-0 conmon[286914]: conmon ff310d4e80af7bc0a7e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff.scope/container/memory.events
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.618217165 +0000 UTC m=+0.135819161 container died ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6c18be5736dc50ea846d98ba4078bd433d0e78f8bbacca9f7eaedab0448671af-merged.mount: Deactivated successfully.
Jan 20 19:21:53 compute-0 podman[286897]: 2026-01-20 19:21:53.664190474 +0000 UTC m=+0.181792440 container remove ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_blackburn, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:21:53 compute-0 systemd[1]: libpod-conmon-ff310d4e80af7bc0a7e957737e91d782778bf0cb833e62e7fb655479b12fb8ff.scope: Deactivated successfully.
Jan 20 19:21:53 compute-0 podman[286938]: 2026-01-20 19:21:53.83305381 +0000 UTC m=+0.042618081 container create df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_aryabhata, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:21:53 compute-0 systemd[1]: Started libpod-conmon-df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92.scope.
Jan 20 19:21:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59481517b43fc0ef100c11a5d534351db54ce0048f519ee27f74962e08b30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:53 compute-0 podman[286938]: 2026-01-20 19:21:53.817292692 +0000 UTC m=+0.026856983 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59481517b43fc0ef100c11a5d534351db54ce0048f519ee27f74962e08b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59481517b43fc0ef100c11a5d534351db54ce0048f519ee27f74962e08b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a3a59481517b43fc0ef100c11a5d534351db54ce0048f519ee27f74962e08b30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:53 compute-0 podman[286938]: 2026-01-20 19:21:53.923796695 +0000 UTC m=+0.133361016 container init df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:21:53 compute-0 podman[286938]: 2026-01-20 19:21:53.934660413 +0000 UTC m=+0.144224684 container start df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_aryabhata, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:21:53 compute-0 podman[286938]: 2026-01-20 19:21:53.937529019 +0000 UTC m=+0.147093330 container attach df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]: {
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:     "0": [
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:         {
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "devices": [
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "/dev/loop3"
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             ],
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "lv_name": "ceph_lv0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "lv_size": "21470642176",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "name": "ceph_lv0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "tags": {
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.cluster_name": "ceph",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.crush_device_class": "",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.encrypted": "0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.osd_id": "0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.type": "block",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.vdo": "0",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:                 "ceph.with_tpm": "0"
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             },
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "type": "block",
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:             "vg_name": "ceph_vg0"
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:         }
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]:     ]
Jan 20 19:21:54 compute-0 musing_aryabhata[286954]: }
Jan 20 19:21:54 compute-0 systemd[1]: libpod-df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92.scope: Deactivated successfully.
Jan 20 19:21:54 compute-0 podman[286938]: 2026-01-20 19:21:54.263593353 +0000 UTC m=+0.473157634 container died df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_aryabhata, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 19:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3a59481517b43fc0ef100c11a5d534351db54ce0048f519ee27f74962e08b30-merged.mount: Deactivated successfully.
Jan 20 19:21:54 compute-0 podman[286938]: 2026-01-20 19:21:54.307557558 +0000 UTC m=+0.517121829 container remove df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:21:54 compute-0 systemd[1]: libpod-conmon-df205ed99a674f3ca9d1f50b4c33222766567bb5a6db8ceb819b2e6ca397ce92.scope: Deactivated successfully.
Jan 20 19:21:54 compute-0 sudo[286832]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:54 compute-0 sudo[286977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:21:54 compute-0 sudo[286977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:54 compute-0 sudo[286977]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:54 compute-0 sudo[287002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:21:54 compute-0 sudo[287002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:54.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:54 compute-0 ceph-mon[74381]: pgmap v1201: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:54.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:54 compute-0 podman[287069]: 2026-01-20 19:21:54.891341364 +0000 UTC m=+0.049833223 container create 1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lamarr, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:21:54 compute-0 systemd[1]: Started libpod-conmon-1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316.scope.
Jan 20 19:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:21:54 compute-0 podman[287069]: 2026-01-20 19:21:54.872113804 +0000 UTC m=+0.030605733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:21:54 compute-0 podman[287069]: 2026-01-20 19:21:54.983149008 +0000 UTC m=+0.141640857 container init 1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:21:54 compute-0 podman[287069]: 2026-01-20 19:21:54.989133426 +0000 UTC m=+0.147625265 container start 1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lamarr, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:21:54 compute-0 podman[287069]: 2026-01-20 19:21:54.992505695 +0000 UTC m=+0.150997564 container attach 1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:21:54 compute-0 gracious_lamarr[287086]: 167 167
Jan 20 19:21:54 compute-0 systemd[1]: libpod-1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316.scope: Deactivated successfully.
Jan 20 19:21:54 compute-0 conmon[287086]: conmon 1c77812473178d45e1ac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316.scope/container/memory.events
Jan 20 19:21:54 compute-0 podman[287069]: 2026-01-20 19:21:54.995529615 +0000 UTC m=+0.154021484 container died 1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lamarr, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab0f4175f3091c5126676c105fc721c020b0952496efed5897e3a1c825737086-merged.mount: Deactivated successfully.
Jan 20 19:21:55 compute-0 podman[287069]: 2026-01-20 19:21:55.039621584 +0000 UTC m=+0.198113444 container remove 1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:21:55 compute-0 systemd[1]: libpod-conmon-1c77812473178d45e1ace752fe377f87b23aa633aff42fac580c7d52dcfd7316.scope: Deactivated successfully.
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:21:55
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'backups', '.nfs', '.rgw.root', '.mgr', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'vms']
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:55 compute-0 podman[287114]: 2026-01-20 19:21:55.208183093 +0000 UTC m=+0.042049936 container create afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:21:55 compute-0 systemd[1]: Started libpod-conmon-afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9.scope.
Jan 20 19:21:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf09ce606509dad2fa7bd9e4d278607598de3fa453c6dbb5854c1011808fc31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf09ce606509dad2fa7bd9e4d278607598de3fa453c6dbb5854c1011808fc31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf09ce606509dad2fa7bd9e4d278607598de3fa453c6dbb5854c1011808fc31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bf09ce606509dad2fa7bd9e4d278607598de3fa453c6dbb5854c1011808fc31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:21:55 compute-0 podman[287114]: 2026-01-20 19:21:55.189155449 +0000 UTC m=+0.023022312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:21:55 compute-0 podman[287114]: 2026-01-20 19:21:55.287532636 +0000 UTC m=+0.121399519 container init afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 19:21:55 compute-0 podman[287114]: 2026-01-20 19:21:55.295523928 +0000 UTC m=+0.129390761 container start afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_joliot, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:21:55 compute-0 podman[287114]: 2026-01-20 19:21:55.298701962 +0000 UTC m=+0.132568795 container attach afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_joliot, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:21:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:21:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:21:55 compute-0 lvm[287204]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:21:55 compute-0 lvm[287204]: VG ceph_vg0 finished
Jan 20 19:21:55 compute-0 rsyslogd[1003]: imjournal from <np0005589270:ceph-mgr>: begin to drop messages due to rate-limiting
Jan 20 19:21:55 compute-0 elegant_joliot[287130]: {}
Jan 20 19:21:55 compute-0 systemd[1]: libpod-afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9.scope: Deactivated successfully.
Jan 20 19:21:56 compute-0 systemd[1]: libpod-afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9.scope: Consumed 1.080s CPU time.
Jan 20 19:21:56 compute-0 podman[287114]: 2026-01-20 19:21:55.999839069 +0000 UTC m=+0.833705932 container died afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_joliot, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-2bf09ce606509dad2fa7bd9e4d278607598de3fa453c6dbb5854c1011808fc31-merged.mount: Deactivated successfully.
Jan 20 19:21:56 compute-0 podman[287114]: 2026-01-20 19:21:56.059719467 +0000 UTC m=+0.893586320 container remove afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:21:56 compute-0 systemd[1]: libpod-conmon-afb7430efa0327292f4a15f96bbef38f974a76c4e965d452dc011dd9a2d9c0e9.scope: Deactivated successfully.
Jan 20 19:21:56 compute-0 sudo[287002]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:21:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:21:56 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:56 compute-0 sudo[287219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:21:56 compute-0 sudo[287219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:56 compute-0 sudo[287219]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:21:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:56.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:21:56 compute-0 ceph-mon[74381]: pgmap v1202: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:56 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:56 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:21:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:56.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:21:57 compute-0 nova_compute[254061]: 2026-01-20 19:21:57.236 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:57.255Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:21:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:57.256Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:21:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:57.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:57 compute-0 nova_compute[254061]: 2026-01-20 19:21:57.278 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:21:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:57 compute-0 ceph-mon[74381]: pgmap v1203: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:21:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:21:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:21:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:21:58.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:21:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:21:58.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:21:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:21:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:59] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:21:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:21:59] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:22:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:00.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:00 compute-0 ceph-mon[74381]: pgmap v1204: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:00.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:00 compute-0 sudo[287249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:22:00 compute-0 sudo[287249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:00 compute-0 sudo[287249]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:02 compute-0 nova_compute[254061]: 2026-01-20 19:22:02.239 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:02 compute-0 nova_compute[254061]: 2026-01-20 19:22:02.278 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:02.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:02 compute-0 ceph-mon[74381]: pgmap v1205: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:02.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:03 compute-0 ceph-mon[74381]: pgmap v1206: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:04.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:04.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:06.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:06 compute-0 ceph-mon[74381]: pgmap v1207: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:06.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:07 compute-0 nova_compute[254061]: 2026-01-20 19:22:07.241 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:07.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:07 compute-0 nova_compute[254061]: 2026-01-20 19:22:07.279 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:08.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:08.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:08 compute-0 ceph-mon[74381]: pgmap v1208: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:08.859Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:22:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:08.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:22:09 compute-0 podman[287283]: 2026-01-20 19:22:09.100461062 +0000 UTC m=+0.069319126 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:22:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:09] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:22:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:09] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:22:10 compute-0 ceph-mon[74381]: pgmap v1209: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:10.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:10.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:11 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:22:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.203 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.204 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.204 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.204 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.204 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.243 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.280 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:12 compute-0 ceph-mon[74381]: pgmap v1210: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:22:12 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2008918583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.654 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:22:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:12.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.818 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.820 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4489MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.820 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:22:12 compute-0 nova_compute[254061]: 2026-01-20 19:22:12.820 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.093 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.093 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.249 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing inventories for resource provider cb9161e5-191d-495c-920a-01144f42a215 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.399 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating ProviderTree inventory for provider cb9161e5-191d-495c-920a-01144f42a215 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.399 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.415 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing aggregate associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.456 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing trait associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.525 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:22:13 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2008918583' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:22:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1228827386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.974 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:22:13 compute-0 nova_compute[254061]: 2026-01-20 19:22:13.982 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:22:14 compute-0 nova_compute[254061]: 2026-01-20 19:22:14.017 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:22:14 compute-0 nova_compute[254061]: 2026-01-20 19:22:14.020 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:22:14 compute-0 nova_compute[254061]: 2026-01-20 19:22:14.021 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:22:14 compute-0 ceph-mon[74381]: pgmap v1211: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1228827386' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:14.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:14.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:15 compute-0 nova_compute[254061]: 2026-01-20 19:22:15.022 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:16 compute-0 nova_compute[254061]: 2026-01-20 19:22:16.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:16 compute-0 nova_compute[254061]: 2026-01-20 19:22:16.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:22:16 compute-0 nova_compute[254061]: 2026-01-20 19:22:16.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:22:16 compute-0 nova_compute[254061]: 2026-01-20 19:22:16.189 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:22:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:16.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:17 compute-0 podman[287354]: 2026-01-20 19:22:17.128397639 +0000 UTC m=+0.109603747 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.169 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.244 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:17.259Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:22:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:17.259Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:22:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:17.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:22:17 compute-0 nova_compute[254061]: 2026-01-20 19:22:17.282 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:17 compute-0 ceph-mon[74381]: pgmap v1212: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:17 compute-0 ceph-mon[74381]: pgmap v1213: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:18 compute-0 nova_compute[254061]: 2026-01-20 19:22:18.169 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:18 compute-0 nova_compute[254061]: 2026-01-20 19:22:18.170 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:18 compute-0 nova_compute[254061]: 2026-01-20 19:22:18.170 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:22:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:18.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:18.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:18.860Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1975370633' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:19] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:22:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:19] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:22:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1808000831' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:20 compute-0 ceph-mon[74381]: pgmap v1214: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:20 compute-0 nova_compute[254061]: 2026-01-20 19:22:20.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:20.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:20.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:20 compute-0 sudo[287383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:22:20 compute-0 sudo[287383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:20 compute-0 sudo[287383]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/283527222' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4097467645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:22:21 compute-0 nova_compute[254061]: 2026-01-20 19:22:21.164 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 20 19:22:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:22 compute-0 ceph-mon[74381]: pgmap v1215: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Jan 20 19:22:22 compute-0 nova_compute[254061]: 2026-01-20 19:22:22.247 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:22 compute-0 nova_compute[254061]: 2026-01-20 19:22:22.283 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:22.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:22.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:23 compute-0 nova_compute[254061]: 2026-01-20 19:22:23.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:23 compute-0 nova_compute[254061]: 2026-01-20 19:22:23.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 19:22:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:24.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:24 compute-0 ceph-mon[74381]: pgmap v1216: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:24.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:22:26 compute-0 nova_compute[254061]: 2026-01-20 19:22:26.141 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:26.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:26 compute-0 ceph-mon[74381]: pgmap v1217: 337 pgs: 337 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:22:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:26.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:27 compute-0 nova_compute[254061]: 2026-01-20 19:22:27.248 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:27.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:27 compute-0 nova_compute[254061]: 2026-01-20 19:22:27.314 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:27 compute-0 ceph-mon[74381]: pgmap v1218: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:28.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:28.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:28.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:22:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:22:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:22:30.300 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:22:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:22:30.300 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:22:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:22:30.301 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:22:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:30.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:30 compute-0 ceph-mon[74381]: pgmap v1219: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:30.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:31 compute-0 nova_compute[254061]: 2026-01-20 19:22:31.125 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:32 compute-0 nova_compute[254061]: 2026-01-20 19:22:32.250 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:32 compute-0 nova_compute[254061]: 2026-01-20 19:22:32.315 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:32.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:32 compute-0 ceph-mon[74381]: pgmap v1220: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 107 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:32.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 175 op/s
Jan 20 19:22:33 compute-0 rsyslogd[1003]: imjournal: 309 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 20 19:22:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:34.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:34 compute-0 ceph-mon[74381]: pgmap v1221: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 175 op/s
Jan 20 19:22:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:34.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 175 op/s
Jan 20 19:22:35 compute-0 ceph-mon[74381]: pgmap v1222: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 175 op/s
Jan 20 19:22:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:36.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:36.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:37 compute-0 nova_compute[254061]: 2026-01-20 19:22:37.252 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:37.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:22:37 compute-0 nova_compute[254061]: 2026-01-20 19:22:37.317 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:38.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:38 compute-0 ceph-mon[74381]: pgmap v1223: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 0 B/s wr, 176 op/s
Jan 20 19:22:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:38.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:38.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:22:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:22:40 compute-0 podman[287427]: 2026-01-20 19:22:40.111657709 +0000 UTC m=+0.077604261 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 19:22:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:40 compute-0 ceph-mon[74381]: pgmap v1224: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:22:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:40.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:41 compute-0 sudo[287448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:22:41 compute-0 sudo[287448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:41 compute-0 sudo[287448]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:41 compute-0 ceph-mon[74381]: pgmap v1225: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:42 compute-0 nova_compute[254061]: 2026-01-20 19:22:42.255 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:42 compute-0 nova_compute[254061]: 2026-01-20 19:22:42.319 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:42.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:42.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.657162) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936963657263, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 679, "num_deletes": 251, "total_data_size": 997911, "memory_usage": 1010920, "flush_reason": "Manual Compaction"}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936963666515, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 984862, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34395, "largest_seqno": 35073, "table_properties": {"data_size": 981260, "index_size": 1446, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8084, "raw_average_key_size": 19, "raw_value_size": 974123, "raw_average_value_size": 2330, "num_data_blocks": 63, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936913, "oldest_key_time": 1768936913, "file_creation_time": 1768936963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9402 microseconds, and 4251 cpu microseconds.
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.666574) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 984862 bytes OK
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.666604) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.668363) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.668401) EVENT_LOG_v1 {"time_micros": 1768936963668388, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.668431) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 994419, prev total WAL file size 994419, number of live WAL files 2.
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.669001) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(961KB)], [71(16MB)]
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936963669031, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17929522, "oldest_snapshot_seqno": -1}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6818 keys, 15753068 bytes, temperature: kUnknown
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936963758497, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 15753068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15706332, "index_size": 28576, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 179251, "raw_average_key_size": 26, "raw_value_size": 15582366, "raw_average_value_size": 2285, "num_data_blocks": 1127, "num_entries": 6818, "num_filter_entries": 6818, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936963, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.758718) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15753068 bytes
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.759849) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 200.2 rd, 175.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 16.2 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(34.2) write-amplify(16.0) OK, records in: 7332, records dropped: 514 output_compression: NoCompression
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.759864) EVENT_LOG_v1 {"time_micros": 1768936963759857, "job": 40, "event": "compaction_finished", "compaction_time_micros": 89542, "compaction_time_cpu_micros": 28091, "output_level": 6, "num_output_files": 1, "total_output_size": 15753068, "num_input_records": 7332, "num_output_records": 6818, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936963760185, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936963762772, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.668953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.762874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.762878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.762880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.762881) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:43 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:43.762882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:22:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:44.633 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:22:44 compute-0 ceph-mon[74381]: pgmap v1226: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:44.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:46.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:46 compute-0 ceph-mon[74381]: pgmap v1227: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:46.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.058915) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936967058944, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 277, "num_deletes": 251, "total_data_size": 49909, "memory_usage": 54664, "flush_reason": "Manual Compaction"}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936967061562, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 49208, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35074, "largest_seqno": 35350, "table_properties": {"data_size": 47322, "index_size": 115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5389, "raw_average_key_size": 20, "raw_value_size": 43623, "raw_average_value_size": 163, "num_data_blocks": 5, "num_entries": 267, "num_filter_entries": 267, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936964, "oldest_key_time": 1768936964, "file_creation_time": 1768936967, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 2727 microseconds, and 1370 cpu microseconds.
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.061633) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 49208 bytes OK
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.061660) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.062931) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.062952) EVENT_LOG_v1 {"time_micros": 1768936967062945, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.062975) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 47815, prev total WAL file size 47815, number of live WAL files 2.
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.063474) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303036' seq:72057594037927935, type:22 .. '6D6772737461740031323538' seq:0, type:0; will stop at (end)
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(48KB)], [74(15MB)]
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936967063593, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15802276, "oldest_snapshot_seqno": -1}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6576 keys, 11725649 bytes, temperature: kUnknown
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936967125150, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11725649, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11685480, "index_size": 22649, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 174415, "raw_average_key_size": 26, "raw_value_size": 11570604, "raw_average_value_size": 1759, "num_data_blocks": 879, "num_entries": 6576, "num_filter_entries": 6576, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768936967, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.125361) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11725649 bytes
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.126695) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 256.6 rd, 190.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 15.0 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(559.4) write-amplify(238.3) OK, records in: 7085, records dropped: 509 output_compression: NoCompression
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.126710) EVENT_LOG_v1 {"time_micros": 1768936967126703, "job": 42, "event": "compaction_finished", "compaction_time_micros": 61589, "compaction_time_cpu_micros": 24226, "output_level": 6, "num_output_files": 1, "total_output_size": 11725649, "num_input_records": 7085, "num_output_records": 6576, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936967126843, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936967129285, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.063413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.129397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.129403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.129405) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.129407) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:47 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:22:47.129409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:22:47 compute-0 nova_compute[254061]: 2026-01-20 19:22:47.255 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:47.262Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:47 compute-0 nova_compute[254061]: 2026-01-20 19:22:47.320 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:48 compute-0 ceph-mon[74381]: pgmap v1228: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:48 compute-0 podman[287479]: 2026-01-20 19:22:48.103193009 +0000 UTC m=+0.080590773 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 20 19:22:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:48.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:48.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:48.864Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3920157921' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:22:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3920157921' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:22:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:49] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Jan 20 19:22:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:49] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Jan 20 19:22:50 compute-0 ceph-mon[74381]: pgmap v1229: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:50 compute-0 nova_compute[254061]: 2026-01-20 19:22:50.444 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:50.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:50.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:52 compute-0 nova_compute[254061]: 2026-01-20 19:22:52.257 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:52 compute-0 nova_compute[254061]: 2026-01-20 19:22:52.322 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:52.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:52 compute-0 ceph-mon[74381]: pgmap v1230: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:22:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:52.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:54.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:54.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:54 compute-0 ceph-mon[74381]: pgmap v1231: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:22:55
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.mgr', '.nfs', '.rgw.root', 'default.rgw.control', 'volumes', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'images', 'cephfs.cephfs.data']
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:22:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:22:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:22:55 compute-0 ceph-mon[74381]: pgmap v1232: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:22:56 compute-0 sudo[287515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:56 compute-0 sudo[287515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:56 compute-0 sudo[287515]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:56 compute-0 sudo[287540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:22:56 compute-0 sudo[287540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:22:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:22:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:56.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:22:57 compute-0 sudo[287540]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:22:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:22:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:22:57 compute-0 nova_compute[254061]: 2026-01-20 19:22:57.258 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:57.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:57 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:22:57 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:22:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:22:57 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:22:57 compute-0 nova_compute[254061]: 2026-01-20 19:22:57.324 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:22:57 compute-0 sudo[287597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:57 compute-0 sudo[287597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:57 compute-0 sudo[287597]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:57 compute-0 sudo[287622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:22:57 compute-0 sudo[287622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.801562888 +0000 UTC m=+0.057519618 container create dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Jan 20 19:22:57 compute-0 systemd[1]: Started libpod-conmon-dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71.scope.
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.766689954 +0000 UTC m=+0.022646734 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:22:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.878292985 +0000 UTC m=+0.134249685 container init dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.885243233 +0000 UTC m=+0.141199913 container start dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.888991545 +0000 UTC m=+0.144948225 container attach dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:22:57 compute-0 priceless_mayer[287705]: 167 167
Jan 20 19:22:57 compute-0 systemd[1]: libpod-dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71.scope: Deactivated successfully.
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.892094139 +0000 UTC m=+0.148050819 container died dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:22:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d36bcd9a95f5668e5aab929cf359af323a3915802fa878b8ecc182808151853e-merged.mount: Deactivated successfully.
Jan 20 19:22:57 compute-0 podman[287689]: 2026-01-20 19:22:57.93019041 +0000 UTC m=+0.186147130 container remove dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Jan 20 19:22:57 compute-0 systemd[1]: libpod-conmon-dd8a12df5ff2c535757cabb7c419c7710792d96faf517a08e4e223ae99369d71.scope: Deactivated successfully.
Jan 20 19:22:58 compute-0 podman[287730]: 2026-01-20 19:22:58.086017637 +0000 UTC m=+0.042067439 container create 673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bose, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:22:58 compute-0 systemd[1]: Started libpod-conmon-673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40.scope.
Jan 20 19:22:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83ddf9520bf80a4a517c96253deb505835af4f038c200de23bbf8890690f092/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83ddf9520bf80a4a517c96253deb505835af4f038c200de23bbf8890690f092/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83ddf9520bf80a4a517c96253deb505835af4f038c200de23bbf8890690f092/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83ddf9520bf80a4a517c96253deb505835af4f038c200de23bbf8890690f092/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a83ddf9520bf80a4a517c96253deb505835af4f038c200de23bbf8890690f092/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:58 compute-0 podman[287730]: 2026-01-20 19:22:58.06872973 +0000 UTC m=+0.024779552 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:22:58 compute-0 podman[287730]: 2026-01-20 19:22:58.170249038 +0000 UTC m=+0.126298850 container init 673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 19:22:58 compute-0 podman[287730]: 2026-01-20 19:22:58.178003687 +0000 UTC m=+0.134053489 container start 673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 19:22:58 compute-0 podman[287730]: 2026-01-20 19:22:58.181340018 +0000 UTC m=+0.137389820 container attach 673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:22:58 compute-0 ceph-mon[74381]: pgmap v1233: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:22:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:22:58 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:22:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:22:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:22:58 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:22:58 compute-0 trusting_bose[287746]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:22:58 compute-0 trusting_bose[287746]: --> All data devices are unavailable
Jan 20 19:22:58 compute-0 systemd[1]: libpod-673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40.scope: Deactivated successfully.
Jan 20 19:22:58 compute-0 podman[287762]: 2026-01-20 19:22:58.566521683 +0000 UTC m=+0.028948764 container died 673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bose, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83ddf9520bf80a4a517c96253deb505835af4f038c200de23bbf8890690f092-merged.mount: Deactivated successfully.
Jan 20 19:22:58 compute-0 podman[287762]: 2026-01-20 19:22:58.605738705 +0000 UTC m=+0.068165756 container remove 673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_bose, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:22:58 compute-0 systemd[1]: libpod-conmon-673706caba8ba1d1f293b0ca6bd61b8e04c977827abbe4cf9dc0d54665ec0e40.scope: Deactivated successfully.
Jan 20 19:22:58 compute-0 sudo[287622]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:22:58.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:58 compute-0 sudo[287776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:58 compute-0 sudo[287776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:58 compute-0 sudo[287776]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:22:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:22:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:22:58.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:22:58 compute-0 sudo[287801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:22:58 compute-0 sudo[287801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:22:58.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.109396816 +0000 UTC m=+0.039157811 container create 8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chebyshev, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:22:59 compute-0 systemd[1]: Started libpod-conmon-8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523.scope.
Jan 20 19:22:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:22:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.164618201 +0000 UTC m=+0.094379216 container init 8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.170409078 +0000 UTC m=+0.100170073 container start 8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.173116511 +0000 UTC m=+0.102877506 container attach 8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:22:59 compute-0 intelligent_chebyshev[287885]: 167 167
Jan 20 19:22:59 compute-0 systemd[1]: libpod-8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523.scope: Deactivated successfully.
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.175076994 +0000 UTC m=+0.104838009 container died 8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.090000532 +0000 UTC m=+0.019761547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-94a3649ab7cf0ff7935dc44937d51ee3aae9dc64e334745448e61ca6bc50d1fd-merged.mount: Deactivated successfully.
Jan 20 19:22:59 compute-0 podman[287868]: 2026-01-20 19:22:59.209871486 +0000 UTC m=+0.139632481 container remove 8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_chebyshev, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:22:59 compute-0 systemd[1]: libpod-conmon-8d20d884966d19f9d6fa6cda6d55288c1e750a6e3936e2a3455c2248a8327523.scope: Deactivated successfully.
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.35336804 +0000 UTC m=+0.035626495 container create a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sutherland, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:22:59 compute-0 systemd[1]: Started libpod-conmon-a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c.scope.
Jan 20 19:22:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db0a03d83b6fbe1406b3be177bbffae18aefa58aa472d28cf51c40653e21976/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db0a03d83b6fbe1406b3be177bbffae18aefa58aa472d28cf51c40653e21976/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db0a03d83b6fbe1406b3be177bbffae18aefa58aa472d28cf51c40653e21976/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3db0a03d83b6fbe1406b3be177bbffae18aefa58aa472d28cf51c40653e21976/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.426314285 +0000 UTC m=+0.108572750 container init a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.433184061 +0000 UTC m=+0.115442516 container start a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.337708506 +0000 UTC m=+0.019966981 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.436154391 +0000 UTC m=+0.118412866 container attach a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]: {
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:     "0": [
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:         {
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "devices": [
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "/dev/loop3"
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             ],
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "lv_name": "ceph_lv0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "lv_size": "21470642176",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "name": "ceph_lv0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "tags": {
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.cluster_name": "ceph",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.crush_device_class": "",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.encrypted": "0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.osd_id": "0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.type": "block",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.vdo": "0",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:                 "ceph.with_tpm": "0"
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             },
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "type": "block",
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:             "vg_name": "ceph_vg0"
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:         }
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]:     ]
Jan 20 19:22:59 compute-0 relaxed_sutherland[287925]: }
Jan 20 19:22:59 compute-0 systemd[1]: libpod-a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c.scope: Deactivated successfully.
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.733957411 +0000 UTC m=+0.416215886 container died a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sutherland, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-3db0a03d83b6fbe1406b3be177bbffae18aefa58aa472d28cf51c40653e21976-merged.mount: Deactivated successfully.
Jan 20 19:22:59 compute-0 podman[287909]: 2026-01-20 19:22:59.771887438 +0000 UTC m=+0.454145893 container remove a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=relaxed_sutherland, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:22:59 compute-0 systemd[1]: libpod-conmon-a4c2858b3122224fd143e1c5029214657cdf485b3157d5053dfa4ad026b1765c.scope: Deactivated successfully.
Jan 20 19:22:59 compute-0 sudo[287801]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:22:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:22:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:22:59 compute-0 sudo[287946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:59 compute-0 sudo[287946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:59 compute-0 sudo[287946]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:59 compute-0 sudo[287971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:22:59 compute-0 sudo[287971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:00 compute-0 podman[288036]: 2026-01-20 19:23:00.25857873 +0000 UTC m=+0.019794266 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:23:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:00.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:23:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:00.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:23:00 compute-0 podman[288036]: 2026-01-20 19:23:00.923607661 +0000 UTC m=+0.684823197 container create e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ganguly, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:23:00 compute-0 ceph-mon[74381]: pgmap v1234: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:23:00 compute-0 systemd[1]: Started libpod-conmon-e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97.scope.
Jan 20 19:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:01 compute-0 podman[288036]: 2026-01-20 19:23:01.021681455 +0000 UTC m=+0.782897011 container init e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:23:01 compute-0 podman[288036]: 2026-01-20 19:23:01.031919142 +0000 UTC m=+0.793134678 container start e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ganguly, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:23:01 compute-0 podman[288036]: 2026-01-20 19:23:01.035617882 +0000 UTC m=+0.796833418 container attach e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ganguly, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:23:01 compute-0 admiring_ganguly[288054]: 167 167
Jan 20 19:23:01 compute-0 systemd[1]: libpod-e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97.scope: Deactivated successfully.
Jan 20 19:23:01 compute-0 podman[288036]: 2026-01-20 19:23:01.037764041 +0000 UTC m=+0.798979577 container died e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-95cdcb1c573f5169f515de2c93e8edba1b1876e45b6b83310e1ccfda9b3dc2b9-merged.mount: Deactivated successfully.
Jan 20 19:23:01 compute-0 podman[288036]: 2026-01-20 19:23:01.072569533 +0000 UTC m=+0.833785059 container remove e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_ganguly, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Jan 20 19:23:01 compute-0 systemd[1]: libpod-conmon-e13ce3fc7fef0a8af4f67182c0e3b6cf78256a3dbbba67ca0aebb7065cf55b97.scope: Deactivated successfully.
Jan 20 19:23:01 compute-0 sudo[288067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:23:01 compute-0 sudo[288067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:01 compute-0 sudo[288067]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:23:01 compute-0 podman[288103]: 2026-01-20 19:23:01.248056893 +0000 UTC m=+0.042914693 container create 52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_engelbart, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:23:01 compute-0 systemd[1]: Started libpod-conmon-52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90.scope.
Jan 20 19:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3200e4b39cde572ce6844b0188eb4cf0f393da8acf778c588ea8987267ca2bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3200e4b39cde572ce6844b0188eb4cf0f393da8acf778c588ea8987267ca2bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3200e4b39cde572ce6844b0188eb4cf0f393da8acf778c588ea8987267ca2bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3200e4b39cde572ce6844b0188eb4cf0f393da8acf778c588ea8987267ca2bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:01 compute-0 podman[288103]: 2026-01-20 19:23:01.23245383 +0000 UTC m=+0.027311650 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:23:01 compute-0 podman[288103]: 2026-01-20 19:23:01.331329666 +0000 UTC m=+0.126187476 container init 52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_engelbart, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:23:01 compute-0 podman[288103]: 2026-01-20 19:23:01.337411561 +0000 UTC m=+0.132269361 container start 52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_engelbart, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:23:01 compute-0 podman[288103]: 2026-01-20 19:23:01.340635048 +0000 UTC m=+0.135492868 container attach 52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_engelbart, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:23:01 compute-0 lvm[288194]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:23:01 compute-0 lvm[288194]: VG ceph_vg0 finished
Jan 20 19:23:01 compute-0 ceph-mon[74381]: pgmap v1235: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:23:01 compute-0 trusting_engelbart[288120]: {}
Jan 20 19:23:02 compute-0 systemd[1]: libpod-52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90.scope: Deactivated successfully.
Jan 20 19:23:02 compute-0 podman[288103]: 2026-01-20 19:23:02.013748927 +0000 UTC m=+0.808606727 container died 52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_engelbart, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:23:02 compute-0 systemd[1]: libpod-52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90.scope: Consumed 1.023s CPU time.
Jan 20 19:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3200e4b39cde572ce6844b0188eb4cf0f393da8acf778c588ea8987267ca2bc-merged.mount: Deactivated successfully.
Jan 20 19:23:02 compute-0 podman[288103]: 2026-01-20 19:23:02.051712024 +0000 UTC m=+0.846569824 container remove 52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_engelbart, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:23:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:02 compute-0 systemd[1]: libpod-conmon-52b11c58e514bf8a9247a9c68fa48485ff551e5fc726214e1770e7b3ce0fbb90.scope: Deactivated successfully.
Jan 20 19:23:02 compute-0 sudo[287971]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:23:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:23:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:23:02 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:23:02 compute-0 sudo[288207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:23:02 compute-0 sudo[288207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:02 compute-0 sudo[288207]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:02 compute-0 nova_compute[254061]: 2026-01-20 19:23:02.260 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:02 compute-0 nova_compute[254061]: 2026-01-20 19:23:02.326 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:02.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:02.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:23:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:23:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:23:04 compute-0 ceph-mon[74381]: pgmap v1236: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:23:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:04.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:04.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:23:06 compute-0 ceph-mon[74381]: pgmap v1237: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:23:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:23:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:06.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:23:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:06.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:23:07 compute-0 nova_compute[254061]: 2026-01-20 19:23:07.262 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:07.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:07 compute-0 nova_compute[254061]: 2026-01-20 19:23:07.326 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:08 compute-0 ceph-mon[74381]: pgmap v1238: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:23:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:08.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:08.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:08.866Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:23:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:23:10 compute-0 ceph-mon[74381]: pgmap v1239: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:23:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:10.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:10.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:11 compute-0 podman[288242]: 2026-01-20 19:23:11.083277565 +0000 UTC m=+0.054350583 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 20 19:23:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:12 compute-0 nova_compute[254061]: 2026-01-20 19:23:12.265 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:12 compute-0 nova_compute[254061]: 2026-01-20 19:23:12.327 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:12 compute-0 ceph-mon[74381]: pgmap v1240: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:12.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:12.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.166 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.167 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.167 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:23:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:23:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/952239754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.595 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.747 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.748 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4458MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.748 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.749 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.821 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.821 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:23:13 compute-0 nova_compute[254061]: 2026-01-20 19:23:13.838 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:23:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:23:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3121529516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:14 compute-0 nova_compute[254061]: 2026-01-20 19:23:14.247 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:23:14 compute-0 nova_compute[254061]: 2026-01-20 19:23:14.252 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:23:14 compute-0 nova_compute[254061]: 2026-01-20 19:23:14.266 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:23:14 compute-0 nova_compute[254061]: 2026-01-20 19:23:14.268 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:23:14 compute-0 nova_compute[254061]: 2026-01-20 19:23:14.268 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.519s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:23:14 compute-0 ceph-mon[74381]: pgmap v1241: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/952239754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3121529516' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:14.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:14.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:16 compute-0 nova_compute[254061]: 2026-01-20 19:23:16.268 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:16 compute-0 ceph-mon[74381]: pgmap v1242: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:16.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:16.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:23:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.148 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.148 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:17.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.266 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:17 compute-0 nova_compute[254061]: 2026-01-20 19:23:17.329 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:18 compute-0 ceph-mon[74381]: pgmap v1243: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:23:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:18.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:23:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:18.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:18.867Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:23:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:18.868Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:23:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:18.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:19 compute-0 nova_compute[254061]: 2026-01-20 19:23:19.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:19 compute-0 podman[288314]: 2026-01-20 19:23:19.165847049 +0000 UTC m=+0.134878151 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 20 19:23:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2714181642' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:23:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:23:20 compute-0 nova_compute[254061]: 2026-01-20 19:23:20.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:20 compute-0 nova_compute[254061]: 2026-01-20 19:23:20.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:20 compute-0 nova_compute[254061]: 2026-01-20 19:23:20.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:23:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:20.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:20.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:20 compute-0 ceph-mon[74381]: pgmap v1244: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2658428913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:21 compute-0 nova_compute[254061]: 2026-01-20 19:23:21.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:21 compute-0 sudo[288342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:23:21 compute-0 sudo[288342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:21 compute-0 sudo[288342]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:22 compute-0 ceph-mon[74381]: pgmap v1245: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1160806447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:22 compute-0 nova_compute[254061]: 2026-01-20 19:23:22.268 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:22 compute-0 nova_compute[254061]: 2026-01-20 19:23:22.330 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:22.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:22.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/863740139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:23:24 compute-0 ceph-mon[74381]: pgmap v1246: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:24.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:24.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:23:26 compute-0 nova_compute[254061]: 2026-01-20 19:23:26.125 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:26 compute-0 ceph-mon[74381]: pgmap v1247: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:26.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:26.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:27.265Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:23:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:27.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:23:27 compute-0 nova_compute[254061]: 2026-01-20 19:23:27.270 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:27 compute-0 nova_compute[254061]: 2026-01-20 19:23:27.332 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:28 compute-0 ceph-mon[74381]: pgmap v1248: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:28.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:28.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:29] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:23:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:29] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:23:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:23:30.301 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:23:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:23:30.301 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:23:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:23:30.301 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:23:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:30.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:30 compute-0 ceph-mon[74381]: pgmap v1249: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:30.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:31 compute-0 ceph-mon[74381]: pgmap v1250: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:32 compute-0 nova_compute[254061]: 2026-01-20 19:23:32.272 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:32 compute-0 nova_compute[254061]: 2026-01-20 19:23:32.334 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:32.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:32.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:34 compute-0 ceph-mon[74381]: pgmap v1251: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:34.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:34.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:36 compute-0 ceph-mon[74381]: pgmap v1252: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:36.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:36.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:37.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:37 compute-0 nova_compute[254061]: 2026-01-20 19:23:37.274 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:37 compute-0 nova_compute[254061]: 2026-01-20 19:23:37.336 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:38 compute-0 ceph-mon[74381]: pgmap v1253: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:38.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:23:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:38.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:23:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:38.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:39] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:23:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:39] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:23:40 compute-0 ceph-mon[74381]: pgmap v1254: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:23:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:40.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:40.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:41 compute-0 sudo[288388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:23:41 compute-0 sudo[288388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:41 compute-0 sudo[288388]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:41 compute-0 podman[288412]: 2026-01-20 19:23:41.422574535 +0000 UTC m=+0.066216773 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:23:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:42 compute-0 nova_compute[254061]: 2026-01-20 19:23:42.277 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:42 compute-0 nova_compute[254061]: 2026-01-20 19:23:42.338 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:42 compute-0 ceph-mon[74381]: pgmap v1255: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:42.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:42.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:44 compute-0 ceph-mon[74381]: pgmap v1256: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:23:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:44.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:23:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:44.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:46 compute-0 ceph-mon[74381]: pgmap v1257: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:46.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:46.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:47.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:23:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:47.267Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:23:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:47.267Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:23:47 compute-0 nova_compute[254061]: 2026-01-20 19:23:47.279 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:47 compute-0 nova_compute[254061]: 2026-01-20 19:23:47.340 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:48 compute-0 ceph-mon[74381]: pgmap v1258: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:48.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:23:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:48.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:23:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:48.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1286504381' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:23:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/1286504381' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:23:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:23:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:23:50 compute-0 podman[288441]: 2026-01-20 19:23:50.087573434 +0000 UTC m=+0.068646599 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 20 19:23:50 compute-0 ceph-mon[74381]: pgmap v1259: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:50.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:50.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:52 compute-0 nova_compute[254061]: 2026-01-20 19:23:52.305 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:52 compute-0 nova_compute[254061]: 2026-01-20 19:23:52.342 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:52 compute-0 ceph-mon[74381]: pgmap v1260: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:23:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:52.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:23:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:52.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:54 compute-0 ceph-mon[74381]: pgmap v1261: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:54.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:54.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:23:55
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', '.nfs', 'vms']
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:23:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:23:56 compute-0 ceph-mon[74381]: pgmap v1262: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:56.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:56.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:23:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:57.268Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:23:57 compute-0 nova_compute[254061]: 2026-01-20 19:23:57.340 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:57 compute-0 nova_compute[254061]: 2026-01-20 19:23:57.344 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:23:58 compute-0 ceph-mon[74381]: pgmap v1263: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:23:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:23:58.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:23:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:23:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:23:58.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:23:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:58.871Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:23:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:23:58.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:23:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:59 compute-0 ceph-mon[74381]: pgmap v1264: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:23:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:23:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:23:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:24:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:00.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:00.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:01 compute-0 sudo[288479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:24:01 compute-0 sudo[288479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:01 compute-0 sudo[288479]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:01 compute-0 anacron[5103]: Job `cron.monthly' started
Jan 20 19:24:01 compute-0 anacron[5103]: Job `cron.monthly' terminated
Jan 20 19:24:01 compute-0 anacron[5103]: Normal exit (3 jobs run)
Jan 20 19:24:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:02 compute-0 ceph-mon[74381]: pgmap v1265: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:02 compute-0 nova_compute[254061]: 2026-01-20 19:24:02.343 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:02 compute-0 nova_compute[254061]: 2026-01-20 19:24:02.345 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:02 compute-0 sudo[288507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:02 compute-0 sudo[288507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:02 compute-0 sudo[288507]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:02 compute-0 sudo[288532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:24:02 compute-0 sudo[288532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:02.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:02.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:02 compute-0 sudo[288532]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:24:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:24:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:03 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:24:03 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:24:03 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:24:03 compute-0 sudo[288589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:03 compute-0 sudo[288589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:03 compute-0 sudo[288589]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:03 compute-0 sudo[288614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:24:03 compute-0 sudo[288614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.813690685 +0000 UTC m=+0.050086157 container create 4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:03 compute-0 systemd[1]: Started libpod-conmon-4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6.scope.
Jan 20 19:24:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.886676691 +0000 UTC m=+0.123072173 container init 4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.793055827 +0000 UTC m=+0.029451329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.892459587 +0000 UTC m=+0.128855039 container start 4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_payne, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.89476964 +0000 UTC m=+0.131165122 container attach 4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:24:03 compute-0 vigilant_payne[288697]: 167 167
Jan 20 19:24:03 compute-0 systemd[1]: libpod-4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6.scope: Deactivated successfully.
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.897477653 +0000 UTC m=+0.133873125 container died 4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:24:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-690b2cc29bf90d3f46e5fc2289dfb32081d5418e49c6d8fd2c387379aa936793-merged.mount: Deactivated successfully.
Jan 20 19:24:03 compute-0 podman[288681]: 2026-01-20 19:24:03.933498998 +0000 UTC m=+0.169894450 container remove 4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_payne, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Jan 20 19:24:03 compute-0 systemd[1]: libpod-conmon-4e26ffac797ab6d0361e164ec00ac24d4e3c497aefc94e6332a0939fc29318a6.scope: Deactivated successfully.
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.115580637 +0000 UTC m=+0.042955614 container create b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:24:04 compute-0 systemd[1]: Started libpod-conmon-b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf.scope.
Jan 20 19:24:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.094416043 +0000 UTC m=+0.021791050 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1a4e005aa2a6594dfe4d2da99450c6677eab3c30e714da7124e3bc40dd7ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1a4e005aa2a6594dfe4d2da99450c6677eab3c30e714da7124e3bc40dd7ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1a4e005aa2a6594dfe4d2da99450c6677eab3c30e714da7124e3bc40dd7ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1a4e005aa2a6594dfe4d2da99450c6677eab3c30e714da7124e3bc40dd7ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b1a4e005aa2a6594dfe4d2da99450c6677eab3c30e714da7124e3bc40dd7ff/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.207225787 +0000 UTC m=+0.134600764 container init b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.216271522 +0000 UTC m=+0.143646499 container start b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.219601352 +0000 UTC m=+0.146976339 container attach b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:04 compute-0 ceph-mon[74381]: pgmap v1266: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:04 compute-0 ceph-mon[74381]: pgmap v1267: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:24:04 compute-0 ceph-mon[74381]: pgmap v1268: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:04 compute-0 practical_dubinsky[288737]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:24:04 compute-0 practical_dubinsky[288737]: --> All data devices are unavailable
Jan 20 19:24:04 compute-0 systemd[1]: libpod-b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf.scope: Deactivated successfully.
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.563045677 +0000 UTC m=+0.490420634 container died b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 19:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6b1a4e005aa2a6594dfe4d2da99450c6677eab3c30e714da7124e3bc40dd7ff-merged.mount: Deactivated successfully.
Jan 20 19:24:04 compute-0 podman[288721]: 2026-01-20 19:24:04.614741507 +0000 UTC m=+0.542116464 container remove b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_dubinsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:04 compute-0 systemd[1]: libpod-conmon-b9cee95fd94b5ba70d89847aad6686f8ebbb1f6c138eb9a8acab3cc396287daf.scope: Deactivated successfully.
Jan 20 19:24:04 compute-0 sudo[288614]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:04 compute-0 sudo[288764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:04 compute-0 sudo[288764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:04 compute-0 sudo[288764]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:04 compute-0 sudo[288789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:24:04 compute-0 sudo[288789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:04.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:04.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.159096511 +0000 UTC m=+0.039661365 container create d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rubin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:24:05 compute-0 systemd[1]: Started libpod-conmon-d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29.scope.
Jan 20 19:24:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 763 B/s rd, 0 op/s
Jan 20 19:24:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.236710701 +0000 UTC m=+0.117275575 container init d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.141476193 +0000 UTC m=+0.022041027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.24369476 +0000 UTC m=+0.124259604 container start d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.247988366 +0000 UTC m=+0.128553230 container attach d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:05 compute-0 elated_rubin[288874]: 167 167
Jan 20 19:24:05 compute-0 systemd[1]: libpod-d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29.scope: Deactivated successfully.
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.250231657 +0000 UTC m=+0.130796491 container died d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-e20ca04793b08a1a01deed7bc1389cc9eba1ba2b695f276f92e4dd7e5aa27947-merged.mount: Deactivated successfully.
Jan 20 19:24:05 compute-0 podman[288857]: 2026-01-20 19:24:05.285466761 +0000 UTC m=+0.166031595 container remove d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elated_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:05 compute-0 systemd[1]: libpod-conmon-d248fb70259289d9b39a9e7a4ffd5fce3d5fd0b9fa96ee17962b429e765f4a29.scope: Deactivated successfully.
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.445597184 +0000 UTC m=+0.038203784 container create 967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_stonebraker, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:24:05 compute-0 systemd[1]: Started libpod-conmon-967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a.scope.
Jan 20 19:24:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd02ba803e734f62716b61524cf4f5607946c2ac7286cbde5761d75760676615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd02ba803e734f62716b61524cf4f5607946c2ac7286cbde5761d75760676615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd02ba803e734f62716b61524cf4f5607946c2ac7286cbde5761d75760676615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd02ba803e734f62716b61524cf4f5607946c2ac7286cbde5761d75760676615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.521291294 +0000 UTC m=+0.113897994 container init 967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.430010323 +0000 UTC m=+0.022616953 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.53556692 +0000 UTC m=+0.128173560 container start 967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_stonebraker, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.540328279 +0000 UTC m=+0.132934969 container attach 967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_stonebraker, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]: {
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:     "0": [
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:         {
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "devices": [
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "/dev/loop3"
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             ],
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "lv_name": "ceph_lv0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "lv_size": "21470642176",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "name": "ceph_lv0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "tags": {
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.cluster_name": "ceph",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.crush_device_class": "",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.encrypted": "0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.osd_id": "0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.type": "block",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.vdo": "0",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:                 "ceph.with_tpm": "0"
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             },
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "type": "block",
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:             "vg_name": "ceph_vg0"
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:         }
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]:     ]
Jan 20 19:24:05 compute-0 suspicious_stonebraker[288914]: }
Jan 20 19:24:05 compute-0 systemd[1]: libpod-967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a.scope: Deactivated successfully.
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.857559765 +0000 UTC m=+0.450166425 container died 967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_stonebraker, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd02ba803e734f62716b61524cf4f5607946c2ac7286cbde5761d75760676615-merged.mount: Deactivated successfully.
Jan 20 19:24:05 compute-0 podman[288898]: 2026-01-20 19:24:05.914245619 +0000 UTC m=+0.506852259 container remove 967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_stonebraker, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:05 compute-0 systemd[1]: libpod-conmon-967dcd3bb06cdeb1b01d31ec7ce0e64c524428c1a6b96dc5c47b62b0800dee9a.scope: Deactivated successfully.
Jan 20 19:24:05 compute-0 sudo[288789]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:06 compute-0 sudo[288941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:06 compute-0 sudo[288941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:06 compute-0 sudo[288941]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:06 compute-0 sudo[288966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:24:06 compute-0 sudo[288966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:06 compute-0 ceph-mon[74381]: pgmap v1269: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 763 B/s rd, 0 op/s
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.511435283 +0000 UTC m=+0.035029379 container create 6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:24:06 compute-0 systemd[1]: Started libpod-conmon-6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2.scope.
Jan 20 19:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.585409146 +0000 UTC m=+0.109003262 container init 6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.591547892 +0000 UTC m=+0.115141988 container start 6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.496226212 +0000 UTC m=+0.019820328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.594363888 +0000 UTC m=+0.117958354 container attach 6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:24:06 compute-0 stoic_wilson[289051]: 167 167
Jan 20 19:24:06 compute-0 systemd[1]: libpod-6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2.scope: Deactivated successfully.
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.596077194 +0000 UTC m=+0.119671290 container died 6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-87934b78fa650bf4af8ec50cdb8c3247dafecf2b4654b4768997e0dff3a00d48-merged.mount: Deactivated successfully.
Jan 20 19:24:06 compute-0 podman[289034]: 2026-01-20 19:24:06.629595121 +0000 UTC m=+0.153189217 container remove 6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_wilson, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:06 compute-0 systemd[1]: libpod-conmon-6d0e1b36e6bbc300ec08d8cb4051666a29fc9efa3112a87ae0d6b0dddb3dcac2.scope: Deactivated successfully.
Jan 20 19:24:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:06.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:06 compute-0 podman[289074]: 2026-01-20 19:24:06.790766533 +0000 UTC m=+0.049861290 container create a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hoover, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:24:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:06.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:06 compute-0 systemd[1]: Started libpod-conmon-a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce.scope.
Jan 20 19:24:06 compute-0 podman[289074]: 2026-01-20 19:24:06.774437892 +0000 UTC m=+0.033532639 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc352d984061b8f4975967f3b0e620c0cd5f2244e1282f7e702aec9da631253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc352d984061b8f4975967f3b0e620c0cd5f2244e1282f7e702aec9da631253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc352d984061b8f4975967f3b0e620c0cd5f2244e1282f7e702aec9da631253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcc352d984061b8f4975967f3b0e620c0cd5f2244e1282f7e702aec9da631253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:06 compute-0 podman[289074]: 2026-01-20 19:24:06.890208435 +0000 UTC m=+0.149303172 container init a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hoover, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:24:06 compute-0 podman[289074]: 2026-01-20 19:24:06.89665322 +0000 UTC m=+0.155747987 container start a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hoover, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 19:24:06 compute-0 podman[289074]: 2026-01-20 19:24:06.899853666 +0000 UTC m=+0.158948503 container attach a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:24:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:07.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:07 compute-0 nova_compute[254061]: 2026-01-20 19:24:07.345 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:07 compute-0 lvm[289165]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:24:07 compute-0 lvm[289165]: VG ceph_vg0 finished
Jan 20 19:24:07 compute-0 vigorous_hoover[289090]: {}
Jan 20 19:24:07 compute-0 systemd[1]: libpod-a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce.scope: Deactivated successfully.
Jan 20 19:24:07 compute-0 systemd[1]: libpod-a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce.scope: Consumed 1.081s CPU time.
Jan 20 19:24:07 compute-0 podman[289074]: 2026-01-20 19:24:07.647070221 +0000 UTC m=+0.906164948 container died a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hoover, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcc352d984061b8f4975967f3b0e620c0cd5f2244e1282f7e702aec9da631253-merged.mount: Deactivated successfully.
Jan 20 19:24:07 compute-0 podman[289074]: 2026-01-20 19:24:07.687206996 +0000 UTC m=+0.946301723 container remove a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_hoover, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Jan 20 19:24:07 compute-0 systemd[1]: libpod-conmon-a2751c5bd169c5d8d924006d72d2a7257b362cbf65bb593b50f034f06de35dce.scope: Deactivated successfully.
Jan 20 19:24:07 compute-0 sudo[288966]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:24:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:24:07 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:08 compute-0 sudo[289181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:24:08 compute-0 sudo[289181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:08 compute-0 sudo[289181]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:08 compute-0 ceph-mon[74381]: pgmap v1270: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:08 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:24:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:24:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:08.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:24:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:08.815 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:08.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 763 B/s rd, 0 op/s
Jan 20 19:24:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:24:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:24:10 compute-0 ceph-mon[74381]: pgmap v1271: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 763 B/s rd, 0 op/s
Jan 20 19:24:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:24:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:10.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:10.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:11 compute-0 ceph-mon[74381]: pgmap v1272: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:12 compute-0 podman[289210]: 2026-01-20 19:24:12.100667063 +0000 UTC m=+0.071614790 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 19:24:12 compute-0 nova_compute[254061]: 2026-01-20 19:24:12.349 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:12 compute-0 nova_compute[254061]: 2026-01-20 19:24:12.351 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:12 compute-0 nova_compute[254061]: 2026-01-20 19:24:12.351 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:24:12 compute-0 nova_compute[254061]: 2026-01-20 19:24:12.352 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:24:12 compute-0 nova_compute[254061]: 2026-01-20 19:24:12.363 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:12 compute-0 nova_compute[254061]: 2026-01-20 19:24:12.363 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:24:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:12.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:12.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.156 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.156 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.156 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.156 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:24:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:24:13 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:24:13 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523893174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.585 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.791 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.793 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4451MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.793 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.794 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.896 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.897 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:24:13 compute-0 nova_compute[254061]: 2026-01-20 19:24:13.912 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:24:14 compute-0 ceph-mon[74381]: pgmap v1273: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:24:14 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/523893174' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:24:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4150529885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:14 compute-0 nova_compute[254061]: 2026-01-20 19:24:14.365 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:24:14 compute-0 nova_compute[254061]: 2026-01-20 19:24:14.371 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:24:14 compute-0 nova_compute[254061]: 2026-01-20 19:24:14.643 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:24:14 compute-0 nova_compute[254061]: 2026-01-20 19:24:14.647 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:24:14 compute-0 nova_compute[254061]: 2026-01-20 19:24:14.647 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:24:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:14.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:14.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4150529885' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:16 compute-0 ceph-mon[74381]: pgmap v1274: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:16.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:16.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:17.269Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:24:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:17.270Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:17 compute-0 nova_compute[254061]: 2026-01-20 19:24:17.364 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:18 compute-0 ceph-mon[74381]: pgmap v1275: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:18 compute-0 nova_compute[254061]: 2026-01-20 19:24:18.648 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:18.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:18.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:18.873Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:24:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:18.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:19 compute-0 nova_compute[254061]: 2026-01-20 19:24:19.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:19 compute-0 nova_compute[254061]: 2026-01-20 19:24:19.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:24:19 compute-0 nova_compute[254061]: 2026-01-20 19:24:19.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:24:19 compute-0 nova_compute[254061]: 2026-01-20 19:24:19.145 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:24:19 compute-0 nova_compute[254061]: 2026-01-20 19:24:19.145 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:19] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:24:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:19] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:24:20 compute-0 ceph-mon[74381]: pgmap v1276: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:20.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:20.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:21 compute-0 podman[289283]: 2026-01-20 19:24:21.100755642 +0000 UTC m=+0.074446607 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller)
Jan 20 19:24:21 compute-0 nova_compute[254061]: 2026-01-20 19:24:21.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:21 compute-0 nova_compute[254061]: 2026-01-20 19:24:21.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:21 compute-0 nova_compute[254061]: 2026-01-20 19:24:21.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:21 compute-0 nova_compute[254061]: 2026-01-20 19:24:21.130 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:21 compute-0 nova_compute[254061]: 2026-01-20 19:24:21.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:24:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:21 compute-0 sudo[289309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:24:21 compute-0 sudo[289309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:21 compute-0 sudo[289309]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1913013377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4217429439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:22 compute-0 nova_compute[254061]: 2026-01-20 19:24:22.365 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:22.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:22.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:22 compute-0 ceph-mon[74381]: pgmap v1277: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1799664748' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2532046845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:24:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:23 compute-0 ceph-mon[74381]: pgmap v1278: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:24.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:24.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:26 compute-0 ceph-mon[74381]: pgmap v1279: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:26.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:26.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:27.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:27 compute-0 nova_compute[254061]: 2026-01-20 19:24:27.366 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:27 compute-0 nova_compute[254061]: 2026-01-20 19:24:27.369 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:28 compute-0 nova_compute[254061]: 2026-01-20 19:24:28.126 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:28 compute-0 ceph-mon[74381]: pgmap v1280: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:28.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:28.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:28.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:24:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:28.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:24:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:28.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:24:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:24:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:29] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:24:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:24:30.302 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:24:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:24:30.302 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:24:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:24:30.302 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:24:30 compute-0 ceph-mon[74381]: pgmap v1281: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:24:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:30.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:24:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:30.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:32 compute-0 ceph-mon[74381]: pgmap v1282: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:32 compute-0 nova_compute[254061]: 2026-01-20 19:24:32.367 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:32 compute-0 nova_compute[254061]: 2026-01-20 19:24:32.370 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:32.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:32.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:33 compute-0 nova_compute[254061]: 2026-01-20 19:24:33.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:34 compute-0 ceph-mon[74381]: pgmap v1283: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:34.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:34.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:36 compute-0 ceph-mon[74381]: pgmap v1284: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:36.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:37.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:24:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:37.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:37 compute-0 nova_compute[254061]: 2026-01-20 19:24:37.369 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:38 compute-0 ceph-mon[74381]: pgmap v1285: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:38.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:38.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:38.876Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:24:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:38.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:24:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:24:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:39] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:24:40 compute-0 ceph-mon[74381]: pgmap v1286: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:24:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:40.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:41 compute-0 sudo[289354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:24:41 compute-0 sudo[289354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:41 compute-0 sudo[289354]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:42 compute-0 nova_compute[254061]: 2026-01-20 19:24:42.373 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:42 compute-0 nova_compute[254061]: 2026-01-20 19:24:42.374 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:42 compute-0 nova_compute[254061]: 2026-01-20 19:24:42.375 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:24:42 compute-0 nova_compute[254061]: 2026-01-20 19:24:42.375 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:24:42 compute-0 nova_compute[254061]: 2026-01-20 19:24:42.406 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:42 compute-0 nova_compute[254061]: 2026-01-20 19:24:42.407 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:24:42 compute-0 ceph-mon[74381]: pgmap v1287: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:42.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:42.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:43 compute-0 podman[289381]: 2026-01-20 19:24:43.08756827 +0000 UTC m=+0.062280297 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 20 19:24:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:44 compute-0 ceph-mon[74381]: pgmap v1288: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:44.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:44.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:45 compute-0 ceph-mon[74381]: pgmap v1289: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:46.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:46.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:47.274Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:24:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:47.274Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:24:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:47.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:24:47 compute-0 nova_compute[254061]: 2026-01-20 19:24:47.409 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:48 compute-0 ceph-mon[74381]: pgmap v1290: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:24:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:48.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:24:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:48.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:48.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:24:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:24:50 compute-0 ceph-mon[74381]: pgmap v1291: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:50.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:50.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:52 compute-0 podman[289410]: 2026-01-20 19:24:52.122900453 +0000 UTC m=+0.091181159 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 20 19:24:52 compute-0 nova_compute[254061]: 2026-01-20 19:24:52.410 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:52 compute-0 nova_compute[254061]: 2026-01-20 19:24:52.412 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:24:52 compute-0 nova_compute[254061]: 2026-01-20 19:24:52.412 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:24:52 compute-0 nova_compute[254061]: 2026-01-20 19:24:52.412 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:24:52 compute-0 nova_compute[254061]: 2026-01-20 19:24:52.457 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:52 compute-0 nova_compute[254061]: 2026-01-20 19:24:52.458 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:24:52 compute-0 ceph-mon[74381]: pgmap v1292: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:52.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:52.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:54 compute-0 ceph-mon[74381]: pgmap v1293: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:24:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:54.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:24:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:54.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:24:55
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['volumes', 'vms', 'images', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', '.nfs']
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:24:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:24:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:24:56 compute-0 ceph-mon[74381]: pgmap v1294: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:56.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:56 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:56 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:24:56 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:56.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:24:57 compute-0 ceph-mgr[74676]: [devicehealth INFO root] Check health
Jan 20 19:24:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:24:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:57.275Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:57 compute-0 nova_compute[254061]: 2026-01-20 19:24:57.458 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:24:57 compute-0 ceph-mon[74381]: pgmap v1295: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:24:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:24:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:24:58.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:24:58 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:24:58 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:24:58 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:24:58.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:24:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:24:58.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:24:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:24:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:24:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:24:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:00 compute-0 ceph-mon[74381]: pgmap v1296: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:00.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:00 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:00 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:00 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:00.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:01 compute-0 sudo[289446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:25:01 compute-0 sudo[289446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:01 compute-0 sudo[289446]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:02 compute-0 nova_compute[254061]: 2026-01-20 19:25:02.460 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:02 compute-0 ceph-mon[74381]: pgmap v1297: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:02.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:02 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:02 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:02 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:02.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:04 compute-0 ceph-mon[74381]: pgmap v1298: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:04.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:04 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:04 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:04 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:04.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:06 compute-0 ceph-mon[74381]: pgmap v1299: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:25:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:25:06 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:06 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:06 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:06.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:07.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:07 compute-0 nova_compute[254061]: 2026-01-20 19:25:07.462 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:25:07 compute-0 nova_compute[254061]: 2026-01-20 19:25:07.464 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:25:07 compute-0 nova_compute[254061]: 2026-01-20 19:25:07.464 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:25:07 compute-0 nova_compute[254061]: 2026-01-20 19:25:07.464 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:25:07 compute-0 nova_compute[254061]: 2026-01-20 19:25:07.497 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:07 compute-0 nova_compute[254061]: 2026-01-20 19:25:07.497 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:25:08 compute-0 ceph-mon[74381]: pgmap v1300: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:08 compute-0 sudo[289477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:08 compute-0 sudo[289477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:08 compute-0 sudo[289477]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:08 compute-0 sudo[289503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:25:08 compute-0 sudo[289503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:08.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:08 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:08 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:08 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:08.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:08.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:09 compute-0 sudo[289503]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:25:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:09 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:25:09 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:09 compute-0 sudo[289560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:09 compute-0 sudo[289560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:09 compute-0 sudo[289560]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:09 compute-0 sudo[289585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:25:09 compute-0 sudo[289585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:25:09 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.787220471 +0000 UTC m=+0.073327136 container create 657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pike, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:25:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.73769254 +0000 UTC m=+0.023799225 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:25:09 compute-0 systemd[1]: Started libpod-conmon-657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b.scope.
Jan 20 19:25:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.877403821 +0000 UTC m=+0.163510486 container init 657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pike, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.884463383 +0000 UTC m=+0.170570058 container start 657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pike, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.888639155 +0000 UTC m=+0.174745850 container attach 657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pike, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 19:25:09 compute-0 frosty_pike[289666]: 167 167
Jan 20 19:25:09 compute-0 systemd[1]: libpod-657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b.scope: Deactivated successfully.
Jan 20 19:25:09 compute-0 conmon[289666]: conmon 657107427deeb7ec77ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b.scope/container/memory.events
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.892362306 +0000 UTC m=+0.178468971 container died 657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pike, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:25:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-becc6af4fceb9bae3f07cdb98666e8c87f40ee22ff27e9b4065bcc60cfb87510-merged.mount: Deactivated successfully.
Jan 20 19:25:09 compute-0 podman[289650]: 2026-01-20 19:25:09.937150228 +0000 UTC m=+0.223256893 container remove 657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_pike, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:25:09 compute-0 systemd[1]: libpod-conmon-657107427deeb7ec77ba4155f438c384d4634eb5a01672c5a6c0cf0e675d5b9b.scope: Deactivated successfully.
Jan 20 19:25:10 compute-0 podman[289690]: 2026-01-20 19:25:10.089542333 +0000 UTC m=+0.038521504 container create 16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_meninsky, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:25:10 compute-0 systemd[1]: Started libpod-conmon-16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09.scope.
Jan 20 19:25:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31cc73b9f291b7454ada872ade21535777981f4326506b9f1997a06aa2715b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31cc73b9f291b7454ada872ade21535777981f4326506b9f1997a06aa2715b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31cc73b9f291b7454ada872ade21535777981f4326506b9f1997a06aa2715b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31cc73b9f291b7454ada872ade21535777981f4326506b9f1997a06aa2715b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b31cc73b9f291b7454ada872ade21535777981f4326506b9f1997a06aa2715b1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:10 compute-0 podman[289690]: 2026-01-20 19:25:10.165610912 +0000 UTC m=+0.114590103 container init 16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:25:10 compute-0 podman[289690]: 2026-01-20 19:25:10.072610045 +0000 UTC m=+0.021589226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:25:10 compute-0 podman[289690]: 2026-01-20 19:25:10.172690433 +0000 UTC m=+0.121669604 container start 16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:25:10 compute-0 podman[289690]: 2026-01-20 19:25:10.174940325 +0000 UTC m=+0.123919496 container attach 16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_meninsky, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:25:10 compute-0 zen_meninsky[289706]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:25:10 compute-0 zen_meninsky[289706]: --> All data devices are unavailable
Jan 20 19:25:10 compute-0 systemd[1]: libpod-16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09.scope: Deactivated successfully.
Jan 20 19:25:10 compute-0 podman[289722]: 2026-01-20 19:25:10.512402089 +0000 UTC m=+0.025851511 container died 16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_meninsky, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True)
Jan 20 19:25:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b31cc73b9f291b7454ada872ade21535777981f4326506b9f1997a06aa2715b1-merged.mount: Deactivated successfully.
Jan 20 19:25:10 compute-0 podman[289722]: 2026-01-20 19:25:10.561165059 +0000 UTC m=+0.074614391 container remove 16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_meninsky, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:25:10 compute-0 systemd[1]: libpod-conmon-16603b01f652c8ed634cfcc95dd123fa5688ede696578727be67954f3c0ccc09.scope: Deactivated successfully.
Jan 20 19:25:10 compute-0 sudo[289585]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:10 compute-0 sudo[289738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:10 compute-0 sudo[289738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:10 compute-0 sudo[289738]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:10 compute-0 sudo[289763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:25:10 compute-0 sudo[289763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:10 compute-0 ceph-mon[74381]: pgmap v1301: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:25:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:10.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:10 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:10 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:10 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:10.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.104874924 +0000 UTC m=+0.034397911 container create dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:25:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:11 compute-0 systemd[1]: Started libpod-conmon-dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649.scope.
Jan 20 19:25:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.180780689 +0000 UTC m=+0.110303706 container init dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_solomon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.089987312 +0000 UTC m=+0.019510329 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.187526441 +0000 UTC m=+0.117049458 container start dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_solomon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Jan 20 19:25:11 compute-0 compassionate_solomon[289845]: 167 167
Jan 20 19:25:11 compute-0 systemd[1]: libpod-dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649.scope: Deactivated successfully.
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.22475741 +0000 UTC m=+0.154280427 container attach dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_solomon, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.22513702 +0000 UTC m=+0.154660007 container died dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_solomon, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:25:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b485e8f8a53ddc6100c85f272466b5e53eefbe54b43c0135353058e1767d3bd-merged.mount: Deactivated successfully.
Jan 20 19:25:11 compute-0 podman[289829]: 2026-01-20 19:25:11.522239681 +0000 UTC m=+0.451762668 container remove dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:25:11 compute-0 systemd[1]: libpod-conmon-dd956fcb2aebe89675eeed3be9f08bd26c20e742d05647f9bb4f9df1379a6649.scope: Deactivated successfully.
Jan 20 19:25:11 compute-0 podman[289870]: 2026-01-20 19:25:11.694398281 +0000 UTC m=+0.040086546 container create e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Jan 20 19:25:11 compute-0 systemd[1]: Started libpod-conmon-e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b.scope.
Jan 20 19:25:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6662fda6d3e3aceaf91905f831e670de00a97e003b342fe94e680e48737643da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:11 compute-0 podman[289870]: 2026-01-20 19:25:11.677128774 +0000 UTC m=+0.022817059 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6662fda6d3e3aceaf91905f831e670de00a97e003b342fe94e680e48737643da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6662fda6d3e3aceaf91905f831e670de00a97e003b342fe94e680e48737643da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6662fda6d3e3aceaf91905f831e670de00a97e003b342fe94e680e48737643da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:11 compute-0 podman[289870]: 2026-01-20 19:25:11.781763936 +0000 UTC m=+0.127452221 container init e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:25:11 compute-0 podman[289870]: 2026-01-20 19:25:11.78893585 +0000 UTC m=+0.134624115 container start e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:25:11 compute-0 podman[289870]: 2026-01-20 19:25:11.791848849 +0000 UTC m=+0.137537114 container attach e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:25:11 compute-0 ceph-mon[74381]: pgmap v1302: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]: {
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:     "0": [
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:         {
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "devices": [
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "/dev/loop3"
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             ],
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "lv_name": "ceph_lv0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "lv_size": "21470642176",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "name": "ceph_lv0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "tags": {
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.cluster_name": "ceph",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.crush_device_class": "",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.encrypted": "0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.osd_id": "0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.type": "block",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.vdo": "0",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:                 "ceph.with_tpm": "0"
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             },
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "type": "block",
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:             "vg_name": "ceph_vg0"
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:         }
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]:     ]
Jan 20 19:25:12 compute-0 compassionate_euclid[289886]: }
Jan 20 19:25:12 compute-0 systemd[1]: libpod-e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b.scope: Deactivated successfully.
Jan 20 19:25:12 compute-0 podman[289870]: 2026-01-20 19:25:12.079707789 +0000 UTC m=+0.425396104 container died e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euclid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:25:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-6662fda6d3e3aceaf91905f831e670de00a97e003b342fe94e680e48737643da-merged.mount: Deactivated successfully.
Jan 20 19:25:12 compute-0 podman[289870]: 2026-01-20 19:25:12.134006219 +0000 UTC m=+0.479694484 container remove e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_euclid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:25:12 compute-0 systemd[1]: libpod-conmon-e27f3f9df7d27ced47e24aaeda14961d0c32c943ded4d3102675edab694c5a3b.scope: Deactivated successfully.
Jan 20 19:25:12 compute-0 sudo[289763]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:12 compute-0 sudo[289908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:12 compute-0 sudo[289908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:12 compute-0 sudo[289908]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:12 compute-0 sudo[289933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:25:12 compute-0 sudo[289933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:12 compute-0 nova_compute[254061]: 2026-01-20 19:25:12.498 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:25:12 compute-0 podman[289999]: 2026-01-20 19:25:12.704979333 +0000 UTC m=+0.037156047 container create 5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 19:25:12 compute-0 systemd[1]: Started libpod-conmon-5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9.scope.
Jan 20 19:25:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:12 compute-0 podman[289999]: 2026-01-20 19:25:12.775557853 +0000 UTC m=+0.107734597 container init 5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:25:12 compute-0 podman[289999]: 2026-01-20 19:25:12.68972361 +0000 UTC m=+0.021900354 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:25:12 compute-0 podman[289999]: 2026-01-20 19:25:12.787300482 +0000 UTC m=+0.119477206 container start 5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:25:12 compute-0 podman[289999]: 2026-01-20 19:25:12.791643449 +0000 UTC m=+0.123820173 container attach 5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:25:12 compute-0 jolly_herschel[290015]: 167 167
Jan 20 19:25:12 compute-0 systemd[1]: libpod-5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9.scope: Deactivated successfully.
Jan 20 19:25:12 compute-0 podman[289999]: 2026-01-20 19:25:12.793623692 +0000 UTC m=+0.125800416 container died 5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:25:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.003000079s ======
Jan 20 19:25:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:12.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 20 19:25:12 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:12 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:12 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:12.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-09362b621dd06a492e2e485f2b7ff73e6093857769d0dfa31b0d0ac8842798c8-merged.mount: Deactivated successfully.
Jan 20 19:25:13 compute-0 podman[289999]: 2026-01-20 19:25:13.117727525 +0000 UTC m=+0.449904249 container remove 5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:25:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:13 compute-0 systemd[1]: libpod-conmon-5f7a3a35f43ce7c255373643197642be826638279a1c8ca07f894db61e2888c9.scope: Deactivated successfully.
Jan 20 19:25:13 compute-0 podman[290036]: 2026-01-20 19:25:13.216988452 +0000 UTC m=+0.057012485 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 19:25:13 compute-0 podman[290059]: 2026-01-20 19:25:13.294993682 +0000 UTC m=+0.053279793 container create 06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:25:13 compute-0 systemd[1]: Started libpod-conmon-06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087.scope.
Jan 20 19:25:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d026787e61bb8fa80468006b7840ffd75b8424734bab7bc52aa22ddaabe6e84f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d026787e61bb8fa80468006b7840ffd75b8424734bab7bc52aa22ddaabe6e84f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d026787e61bb8fa80468006b7840ffd75b8424734bab7bc52aa22ddaabe6e84f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d026787e61bb8fa80468006b7840ffd75b8424734bab7bc52aa22ddaabe6e84f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:13 compute-0 podman[290059]: 2026-01-20 19:25:13.27494873 +0000 UTC m=+0.033234851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:25:13 compute-0 podman[290059]: 2026-01-20 19:25:13.373348653 +0000 UTC m=+0.131634794 container init 06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:25:13 compute-0 podman[290059]: 2026-01-20 19:25:13.381930115 +0000 UTC m=+0.140216226 container start 06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:25:13 compute-0 podman[290059]: 2026-01-20 19:25:13.385602415 +0000 UTC m=+0.143888526 container attach 06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Jan 20 19:25:14 compute-0 lvm[290149]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:25:14 compute-0 lvm[290149]: VG ceph_vg0 finished
Jan 20 19:25:14 compute-0 practical_pascal[290075]: {}
Jan 20 19:25:14 compute-0 systemd[1]: libpod-06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087.scope: Deactivated successfully.
Jan 20 19:25:14 compute-0 systemd[1]: libpod-06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087.scope: Consumed 1.199s CPU time.
Jan 20 19:25:14 compute-0 podman[290059]: 2026-01-20 19:25:14.10769988 +0000 UTC m=+0.865985981 container died 06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d026787e61bb8fa80468006b7840ffd75b8424734bab7bc52aa22ddaabe6e84f-merged.mount: Deactivated successfully.
Jan 20 19:25:14 compute-0 podman[290059]: 2026-01-20 19:25:14.199428752 +0000 UTC m=+0.957714853 container remove 06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:25:14 compute-0 systemd[1]: libpod-conmon-06023f1a5fc69b8e37b3bb7ec2c15bd95c3ec7bac64025585011ab5cd9a56087.scope: Deactivated successfully.
Jan 20 19:25:14 compute-0 sudo[289933]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.365 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.366 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.366 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.367 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.368 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:25:14 compute-0 ceph-mon[74381]: pgmap v1303: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:14 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:25:14 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/285202446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:14.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:14 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:14 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:14 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:14.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:14 compute-0 nova_compute[254061]: 2026-01-20 19:25:14.880 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.030 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.031 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4439MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.031 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.032 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:25:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:25:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.116 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.117 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:25:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:15 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.141 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:25:15 compute-0 sudo[290192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:25:15 compute-0 sudo[290192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:15 compute-0 sudo[290192]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:25:15 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1254032478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.575 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.581 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.599 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.600 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:25:15 compute-0 nova_compute[254061]: 2026-01-20 19:25:15.601 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:25:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/285202446' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:15 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:15 compute-0 ceph-mon[74381]: pgmap v1304: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:15 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:25:15 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1254032478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:16.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:16 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:16 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:16 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:16.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:17.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:17 compute-0 nova_compute[254061]: 2026-01-20 19:25:17.501 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:25:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=cleanup t=2026-01-20T19:25:17.545273641Z level=info msg="Completed cleanup jobs" duration=37.353022ms
Jan 20 19:25:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=grafana.update.checker t=2026-01-20T19:25:17.66973765Z level=info msg="Update check succeeded" duration=77.650402ms
Jan 20 19:25:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0[106777]: logger=plugins.update.checker t=2026-01-20T19:25:17.682897875Z level=info msg="Update check succeeded" duration=90.832628ms
Jan 20 19:25:18 compute-0 ceph-mon[74381]: pgmap v1305: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:18.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:25:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:18.879Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:25:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:18.879Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:25:18 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:18 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:25:18 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:18.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:25:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:19] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:25:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:19] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:25:20 compute-0 ceph-mon[74381]: pgmap v1306: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:25:20 compute-0 nova_compute[254061]: 2026-01-20 19:25:20.602 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:20 compute-0 nova_compute[254061]: 2026-01-20 19:25:20.602 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:25:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:20.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:25:20 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:20 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:20 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:20.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:21 compute-0 nova_compute[254061]: 2026-01-20 19:25:21.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:21 compute-0 nova_compute[254061]: 2026-01-20 19:25:21.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:25:21 compute-0 nova_compute[254061]: 2026-01-20 19:25:21.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:25:21 compute-0 nova_compute[254061]: 2026-01-20 19:25:21.151 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:25:21 compute-0 nova_compute[254061]: 2026-01-20 19:25:21.151 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:21 compute-0 nova_compute[254061]: 2026-01-20 19:25:21.152 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:21 compute-0 sudo[290244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:25:21 compute-0 sudo[290244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:21 compute-0 sudo[290244]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:22 compute-0 nova_compute[254061]: 2026-01-20 19:25:22.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:22 compute-0 nova_compute[254061]: 2026-01-20 19:25:22.503 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:22 compute-0 ceph-mon[74381]: pgmap v1307: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1747101977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:22 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2855564345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:22.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:22 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:22 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:22 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:22.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:23 compute-0 nova_compute[254061]: 2026-01-20 19:25:23.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:23 compute-0 nova_compute[254061]: 2026-01-20 19:25:23.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:25:23 compute-0 podman[290271]: 2026-01-20 19:25:23.168785298 +0000 UTC m=+0.135394335 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 19:25:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/376185115' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2418963547' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:25:24 compute-0 ceph-mon[74381]: pgmap v1308: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:25:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:24.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:25:24 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:24 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:24 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:24.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:25:26 compute-0 ceph-mon[74381]: pgmap v1309: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:26.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:26 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:26 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:26 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:26.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:27.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:27 compute-0 nova_compute[254061]: 2026-01-20 19:25:27.505 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:28 compute-0 ceph-mon[74381]: pgmap v1310: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:28.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:28 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:28 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:28 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:28.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:29 compute-0 nova_compute[254061]: 2026-01-20 19:25:29.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:29] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:29 compute-0 ceph-mon[74381]: pgmap v1311: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:25:30.302 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:25:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:25:30.302 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:25:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:25:30.302 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:25:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:30 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9639c5d0 =====
Jan 20 19:25:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:30 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:30 compute-0 radosgw[89571]: ====== req done req=0x7f0e9639c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:30 compute-0 radosgw[89571]: beast: 0x7f0e9639c5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:30.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:32 compute-0 ceph-mon[74381]: pgmap v1312: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:32 compute-0 nova_compute[254061]: 2026-01-20 19:25:32.507 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:25:32 compute-0 nova_compute[254061]: 2026-01-20 19:25:32.509 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:25:32 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:32.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:25:32 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9639c5d0 =====
Jan 20 19:25:32 compute-0 radosgw[89571]: ====== req done req=0x7f0e9639c5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:32 compute-0 radosgw[89571]: beast: 0x7f0e9639c5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:32.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:34 compute-0 ceph-mon[74381]: pgmap v1313: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:34.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:34 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:34 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:25:34 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:34.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:25:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:36 compute-0 ceph-mon[74381]: pgmap v1314: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:36.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:36 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:36 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:36 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:36.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:37.278Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:37 compute-0 nova_compute[254061]: 2026-01-20 19:25:37.509 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:38 compute-0 ceph-mon[74381]: pgmap v1315: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:38.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:38.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:38 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:38 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:38 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:38.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:39] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:25:40 compute-0 ceph-mon[74381]: pgmap v1316: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:25:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:40.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:40 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:40 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:25:40 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:40.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:25:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:41 compute-0 sudo[290315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:25:41 compute-0 sudo[290315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:41 compute-0 sudo[290315]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:42 compute-0 nova_compute[254061]: 2026-01-20 19:25:42.511 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:42 compute-0 nova_compute[254061]: 2026-01-20 19:25:42.512 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:42 compute-0 ceph-mon[74381]: pgmap v1317: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:42.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:42 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:42 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:42 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:42.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:44 compute-0 podman[290342]: 2026-01-20 19:25:44.121754247 +0000 UTC m=+0.099344880 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:25:44 compute-0 ceph-mon[74381]: pgmap v1318: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:44.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:44 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:44 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:44 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:44.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:46 compute-0 ceph-mon[74381]: pgmap v1319: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:46 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:46 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:46 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:46.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:47.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:47.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:47 compute-0 nova_compute[254061]: 2026-01-20 19:25:47.513 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:25:47 compute-0 nova_compute[254061]: 2026-01-20 19:25:47.514 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:47 compute-0 nova_compute[254061]: 2026-01-20 19:25:47.514 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:25:47 compute-0 nova_compute[254061]: 2026-01-20 19:25:47.514 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:25:47 compute-0 nova_compute[254061]: 2026-01-20 19:25:47.514 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:25:47 compute-0 nova_compute[254061]: 2026-01-20 19:25:47.515 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:48 compute-0 ceph-mon[74381]: pgmap v1320: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:25:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2221326725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:25:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:25:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2221326725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:25:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:48.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:48 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:48 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:25:48 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:25:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:25:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:49.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:25:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2221326725' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:25:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2221326725' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:25:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:49] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:25:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:49] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:25:50 compute-0 ceph-mon[74381]: pgmap v1321: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:50 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:50 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:50 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:50.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:51.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:52 compute-0 ceph-mon[74381]: pgmap v1322: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:52 compute-0 nova_compute[254061]: 2026-01-20 19:25:52.515 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:52 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:52 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:52 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:53.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:54 compute-0 podman[290371]: 2026-01-20 19:25:54.156422447 +0000 UTC m=+0.125944090 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, container_name=ovn_controller)
Jan 20 19:25:54 compute-0 ceph-mon[74381]: pgmap v1323: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:54 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:54 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 19:25:54 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:54.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 19:25:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:55.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:25:55
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['backups', '.nfs', 'images', '.mgr', 'cephfs.cephfs.meta', 'volumes', 'vms', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control']
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:25:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:25:56 compute-0 ceph-mon[74381]: pgmap v1324: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:25:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:56.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:25:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:57.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:25:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:57.279Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:25:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:57.279Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:25:57 compute-0 nova_compute[254061]: 2026-01-20 19:25:57.516 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:57 compute-0 nova_compute[254061]: 2026-01-20 19:25:57.517 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:25:58 compute-0 ceph-mon[74381]: pgmap v1325: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:25:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:25:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:58.882Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:25:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:25:58.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:25:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:25:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:25:59.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:25:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:25:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:25:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:25:59.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:25:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:25:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:25:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:25:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:26:00 compute-0 ceph-mon[74381]: pgmap v1326: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:01.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:26:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:01.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:26:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:01 compute-0 sudo[290406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:26:01 compute-0 sudo[290406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:01 compute-0 sudo[290406]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:02 compute-0 nova_compute[254061]: 2026-01-20 19:26:02.518 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:26:02 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:26:02 compute-0 ceph-mon[74381]: pgmap v1327: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:03.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:03.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:04 compute-0 ceph-mon[74381]: pgmap v1328: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:05.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:05.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:06 compute-0 ceph-mon[74381]: pgmap v1329: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:26:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:07.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:26:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:07.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:07.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:07 compute-0 nova_compute[254061]: 2026-01-20 19:26:07.519 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:07 compute-0 ceph-mon[74381]: pgmap v1330: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:08.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:26:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:09.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:26:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:09.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:26:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:26:10 compute-0 ceph-mon[74381]: pgmap v1331: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:26:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:11.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:11.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:12 compute-0 ceph-mon[74381]: pgmap v1332: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:12 compute-0 nova_compute[254061]: 2026-01-20 19:26:12.521 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:26:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:13.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:26:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:13.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:14 compute-0 ceph-mon[74381]: pgmap v1333: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:15.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:15.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:15 compute-0 podman[290446]: 2026-01-20 19:26:15.078078718 +0000 UTC m=+0.054316221 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.153 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.154 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.154 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.154 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:26:15 compute-0 sudo[290486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:15 compute-0 sudo[290486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:15 compute-0 sudo[290486]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:15 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:26:15 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680420068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:15 compute-0 sudo[290511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:26:15 compute-0 sudo[290511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.611 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.753 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.754 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4485MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.754 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.754 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.819 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.820 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:26:15 compute-0 nova_compute[254061]: 2026-01-20 19:26:15.850 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:26:16 compute-0 sudo[290511]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 19:26:16 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:16 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:26:16 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216829996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:16 compute-0 nova_compute[254061]: 2026-01-20 19:26:16.301 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:26:16 compute-0 nova_compute[254061]: 2026-01-20 19:26:16.305 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:26:16 compute-0 nova_compute[254061]: 2026-01-20 19:26:16.322 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:26:16 compute-0 nova_compute[254061]: 2026-01-20 19:26:16.323 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:26:16 compute-0 nova_compute[254061]: 2026-01-20 19:26:16.324 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:26:16 compute-0 ceph-mon[74381]: pgmap v1334: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/680420068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:16 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:16 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:16 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2216829996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:17.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:17.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:17.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:26:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:26:17 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:17 compute-0 nova_compute[254061]: 2026-01-20 19:26:17.522 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:26:17 compute-0 nova_compute[254061]: 2026-01-20 19:26:17.524 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:17 compute-0 nova_compute[254061]: 2026-01-20 19:26:17.524 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:26:17 compute-0 nova_compute[254061]: 2026-01-20 19:26:17.524 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:26:17 compute-0 nova_compute[254061]: 2026-01-20 19:26:17.525 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:26:17 compute-0 nova_compute[254061]: 2026-01-20 19:26:17.526 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:26:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Jan 20 19:26:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:26:18 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:18 compute-0 ceph-mon[74381]: pgmap v1335: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:18 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:18 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:18.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:19.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:19.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Jan 20 19:26:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:26:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.345673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937179345784, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2075, "num_deletes": 251, "total_data_size": 4094547, "memory_usage": 4160576, "flush_reason": "Manual Compaction"}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 20 19:26:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937179378490, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3999509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35351, "largest_seqno": 37425, "table_properties": {"data_size": 3990100, "index_size": 5967, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19296, "raw_average_key_size": 20, "raw_value_size": 3971243, "raw_average_value_size": 4193, "num_data_blocks": 256, "num_entries": 947, "num_filter_entries": 947, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936968, "oldest_key_time": 1768936968, "file_creation_time": 1768937179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 32873 microseconds, and 16966 cpu microseconds.
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.378559) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3999509 bytes OK
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.378587) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.381559) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.381614) EVENT_LOG_v1 {"time_micros": 1768937179381602, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.381643) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 4086083, prev total WAL file size 4088234, number of live WAL files 2.
Jan 20 19:26:19 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.383969) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3905KB)], [77(11MB)]
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937179384109, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15725158, "oldest_snapshot_seqno": -1}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7003 keys, 13448185 bytes, temperature: kUnknown
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937179468599, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 13448185, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13404181, "index_size": 25396, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 183983, "raw_average_key_size": 26, "raw_value_size": 13280701, "raw_average_value_size": 1896, "num_data_blocks": 991, "num_entries": 7003, "num_filter_entries": 7003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768937179, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.468962) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 13448185 bytes
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.470408) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.0 rd, 159.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.2 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 7523, records dropped: 520 output_compression: NoCompression
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.470438) EVENT_LOG_v1 {"time_micros": 1768937179470425, "job": 44, "event": "compaction_finished", "compaction_time_micros": 84562, "compaction_time_cpu_micros": 37982, "output_level": 6, "num_output_files": 1, "total_output_size": 13448185, "num_input_records": 7523, "num_output_records": 7003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937179471783, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937179475927, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.383538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.476017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.476025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.476028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.476031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:26:19 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:26:19.476033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:26:19 compute-0 sudo[290597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:19 compute-0 sudo[290597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:19 compute-0 sudo[290597]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:19 compute-0 sudo[290622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:26:19 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:26:19 compute-0 sudo[290622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:19] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 20 19:26:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:19] "GET /metrics HTTP/1.1" 200 48449 "" "Prometheus/2.51.0"
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:20.002184824 +0000 UTC m=+0.038515413 container create 1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kirch, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:26:20 compute-0 systemd[1]: Started libpod-conmon-1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915.scope.
Jan 20 19:26:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:20.067235685 +0000 UTC m=+0.103566304 container init 1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kirch, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:20.075982652 +0000 UTC m=+0.112313251 container start 1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:20.07961021 +0000 UTC m=+0.115940829 container attach 1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kirch, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:19.985673428 +0000 UTC m=+0.022004037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:26:20 compute-0 elastic_kirch[290706]: 167 167
Jan 20 19:26:20 compute-0 systemd[1]: libpod-1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915.scope: Deactivated successfully.
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:20.08256629 +0000 UTC m=+0.118896879 container died 1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdb0f84ec8f00381181e0af0329fd4b8aecd6be402a4863d0eb71c80a8e9b30a-merged.mount: Deactivated successfully.
Jan 20 19:26:20 compute-0 podman[290690]: 2026-01-20 19:26:20.120286261 +0000 UTC m=+0.156616850 container remove 1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kirch, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:26:20 compute-0 systemd[1]: libpod-conmon-1b3497ca688b2ba197bb0411bdf1b84e018d94626fa1799950859a6f34d90915.scope: Deactivated successfully.
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.271641818 +0000 UTC m=+0.042207543 container create f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:26:20 compute-0 systemd[1]: Started libpod-conmon-f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70.scope.
Jan 20 19:26:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c94c83ab757b9827cb4c75f6168598df4bfe09bc972f75f363df74ad0083ae1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c94c83ab757b9827cb4c75f6168598df4bfe09bc972f75f363df74ad0083ae1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c94c83ab757b9827cb4c75f6168598df4bfe09bc972f75f363df74ad0083ae1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.251901054 +0000 UTC m=+0.022466779 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c94c83ab757b9827cb4c75f6168598df4bfe09bc972f75f363df74ad0083ae1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c94c83ab757b9827cb4c75f6168598df4bfe09bc972f75f363df74ad0083ae1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.358573511 +0000 UTC m=+0.129139226 container init f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.365862908 +0000 UTC m=+0.136428603 container start f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.368488589 +0000 UTC m=+0.139054324 container attach f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 19:26:20 compute-0 ceph-mon[74381]: pgmap v1336: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:20 compute-0 ceph-mon[74381]: pgmap v1337: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:20 compute-0 cranky_kare[290747]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:26:20 compute-0 cranky_kare[290747]: --> All data devices are unavailable
Jan 20 19:26:20 compute-0 systemd[1]: libpod-f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70.scope: Deactivated successfully.
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.670852593 +0000 UTC m=+0.441418308 container died f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c94c83ab757b9827cb4c75f6168598df4bfe09bc972f75f363df74ad0083ae1-merged.mount: Deactivated successfully.
Jan 20 19:26:20 compute-0 podman[290729]: 2026-01-20 19:26:20.71068585 +0000 UTC m=+0.481251595 container remove f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_kare, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:26:20 compute-0 systemd[1]: libpod-conmon-f25b6e765089db9ebba4c4f55c069051da3c72113bc996aaf01ee871323d4f70.scope: Deactivated successfully.
Jan 20 19:26:20 compute-0 sudo[290622]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:20 compute-0 sudo[290776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:20 compute-0 sudo[290776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:20 compute-0 sudo[290776]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:20 compute-0 sudo[290801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:26:20 compute-0 sudo[290801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:21.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:21.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.263278787 +0000 UTC m=+0.059018918 container create c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 19:26:21 compute-0 systemd[1]: Started libpod-conmon-c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808.scope.
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.234624122 +0000 UTC m=+0.030364343 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:26:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.353675454 +0000 UTC m=+0.149415605 container init c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.361428934 +0000 UTC m=+0.157169075 container start c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhaskara, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.364683243 +0000 UTC m=+0.160423374 container attach c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:26:21 compute-0 jovial_bhaskara[290886]: 167 167
Jan 20 19:26:21 compute-0 systemd[1]: libpod-c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808.scope: Deactivated successfully.
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.366604494 +0000 UTC m=+0.162344625 container died c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Jan 20 19:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c91922b5579b5a29ee9d2903db2ef50cc8d53d8e0d42222a966751cde34ab3bf-merged.mount: Deactivated successfully.
Jan 20 19:26:21 compute-0 podman[290869]: 2026-01-20 19:26:21.400317067 +0000 UTC m=+0.196057198 container remove c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True)
Jan 20 19:26:21 compute-0 systemd[1]: libpod-conmon-c5081b51b91961648e6ac0aa6053b321622fa408b00182d00855573400c4e808.scope: Deactivated successfully.
Jan 20 19:26:21 compute-0 podman[290908]: 2026-01-20 19:26:21.605993864 +0000 UTC m=+0.060664523 container create b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_merkle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:26:21 compute-0 systemd[1]: Started libpod-conmon-b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968.scope.
Jan 20 19:26:21 compute-0 podman[290908]: 2026-01-20 19:26:21.573888855 +0000 UTC m=+0.028559564 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:26:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442df9ee88d293a3df8ce7f63ad62dc0691abb191e0cb2a5550346792e562c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442df9ee88d293a3df8ce7f63ad62dc0691abb191e0cb2a5550346792e562c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442df9ee88d293a3df8ce7f63ad62dc0691abb191e0cb2a5550346792e562c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a442df9ee88d293a3df8ce7f63ad62dc0691abb191e0cb2a5550346792e562c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:21 compute-0 podman[290908]: 2026-01-20 19:26:21.721094199 +0000 UTC m=+0.175764838 container init b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_merkle, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 19:26:21 compute-0 podman[290908]: 2026-01-20 19:26:21.72853917 +0000 UTC m=+0.183209799 container start b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_merkle, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:26:21 compute-0 podman[290908]: 2026-01-20 19:26:21.733846384 +0000 UTC m=+0.188517033 container attach b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_merkle, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]: {
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:     "0": [
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:         {
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "devices": [
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "/dev/loop3"
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             ],
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "lv_name": "ceph_lv0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "lv_size": "21470642176",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "name": "ceph_lv0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "tags": {
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.cluster_name": "ceph",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.crush_device_class": "",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.encrypted": "0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.osd_id": "0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.type": "block",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.vdo": "0",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:                 "ceph.with_tpm": "0"
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             },
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "type": "block",
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:             "vg_name": "ceph_vg0"
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:         }
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]:     ]
Jan 20 19:26:21 compute-0 compassionate_merkle[290925]: }
Jan 20 19:26:21 compute-0 systemd[1]: libpod-b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968.scope: Deactivated successfully.
Jan 20 19:26:21 compute-0 podman[290908]: 2026-01-20 19:26:21.987103219 +0000 UTC m=+0.441773878 container died b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_merkle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-a442df9ee88d293a3df8ce7f63ad62dc0691abb191e0cb2a5550346792e562c9-merged.mount: Deactivated successfully.
Jan 20 19:26:22 compute-0 podman[290908]: 2026-01-20 19:26:22.030982396 +0000 UTC m=+0.485653015 container remove b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:26:22 compute-0 systemd[1]: libpod-conmon-b00b7981811277240627953186edf2fd6fe3c16502926800af29242fa5209968.scope: Deactivated successfully.
Jan 20 19:26:22 compute-0 sudo[290801]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:22 compute-0 sudo[290935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:26:22 compute-0 sudo[290935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:22 compute-0 sudo[290935]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:22 compute-0 sudo[290971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:22 compute-0 sudo[290971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:22 compute-0 sudo[290971]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:22 compute-0 sudo[290996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:26:22 compute-0 sudo[290996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:22 compute-0 nova_compute[254061]: 2026-01-20 19:26:22.324 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:22 compute-0 nova_compute[254061]: 2026-01-20 19:26:22.326 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:22 compute-0 nova_compute[254061]: 2026-01-20 19:26:22.326 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:22 compute-0 nova_compute[254061]: 2026-01-20 19:26:22.326 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:22 compute-0 nova_compute[254061]: 2026-01-20 19:26:22.526 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:22 compute-0 podman[291063]: 2026-01-20 19:26:22.562942424 +0000 UTC m=+0.048936185 container create 9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hopper, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 20 19:26:22 compute-0 systemd[1]: Started libpod-conmon-9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785.scope.
Jan 20 19:26:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:22 compute-0 podman[291063]: 2026-01-20 19:26:22.537260559 +0000 UTC m=+0.023254410 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:26:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:23.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:23 compute-0 ceph-mon[74381]: pgmap v1338: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:23.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:23 compute-0 podman[291063]: 2026-01-20 19:26:23.106587749 +0000 UTC m=+0.592581510 container init 9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hopper, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:26:23 compute-0 podman[291063]: 2026-01-20 19:26:23.112016316 +0000 UTC m=+0.598010077 container start 9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:26:23 compute-0 trusting_hopper[291079]: 167 167
Jan 20 19:26:23 compute-0 systemd[1]: libpod-9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785.scope: Deactivated successfully.
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:26:23 compute-0 podman[291063]: 2026-01-20 19:26:23.146470868 +0000 UTC m=+0.632464669 container attach 9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:26:23 compute-0 podman[291063]: 2026-01-20 19:26:23.147316261 +0000 UTC m=+0.633310052 container died 9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:26:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-57b5d950bf60cbc3f8dbc16e1ccb409fec19f20c8e94112d09bf97b306148809-merged.mount: Deactivated successfully.
Jan 20 19:26:23 compute-0 podman[291063]: 2026-01-20 19:26:23.188535217 +0000 UTC m=+0.674528978 container remove 9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:26:23 compute-0 systemd[1]: libpod-conmon-9a723a07630b0ae40fe3e5b80b24ce706680e1ca012da42cfa214e36b7524785.scope: Deactivated successfully.
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.209 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.210 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.210 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:23 compute-0 nova_compute[254061]: 2026-01-20 19:26:23.211 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:26:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:23 compute-0 podman[291105]: 2026-01-20 19:26:23.369277188 +0000 UTC m=+0.039089579 container create 309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:26:23 compute-0 systemd[1]: Started libpod-conmon-309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a.scope.
Jan 20 19:26:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ceada4faff23f96089ff8b2d564543d312e1a10c25aa5a71668986fd530265/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ceada4faff23f96089ff8b2d564543d312e1a10c25aa5a71668986fd530265/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ceada4faff23f96089ff8b2d564543d312e1a10c25aa5a71668986fd530265/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84ceada4faff23f96089ff8b2d564543d312e1a10c25aa5a71668986fd530265/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:23 compute-0 podman[291105]: 2026-01-20 19:26:23.353603514 +0000 UTC m=+0.023415925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:26:23 compute-0 podman[291105]: 2026-01-20 19:26:23.450828926 +0000 UTC m=+0.120641337 container init 309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 19:26:23 compute-0 podman[291105]: 2026-01-20 19:26:23.457866207 +0000 UTC m=+0.127678598 container start 309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_vaughan, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True)
Jan 20 19:26:23 compute-0 podman[291105]: 2026-01-20 19:26:23.461221047 +0000 UTC m=+0.131033438 container attach 309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_vaughan, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:26:24 compute-0 lvm[291198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:26:24 compute-0 lvm[291198]: VG ceph_vg0 finished
Jan 20 19:26:24 compute-0 angry_vaughan[291121]: {}
Jan 20 19:26:24 compute-0 systemd[1]: libpod-309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a.scope: Deactivated successfully.
Jan 20 19:26:24 compute-0 systemd[1]: libpod-309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a.scope: Consumed 1.142s CPU time.
Jan 20 19:26:24 compute-0 podman[291203]: 2026-01-20 19:26:24.215614446 +0000 UTC m=+0.023502117 container died 309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Jan 20 19:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-84ceada4faff23f96089ff8b2d564543d312e1a10c25aa5a71668986fd530265-merged.mount: Deactivated successfully.
Jan 20 19:26:24 compute-0 podman[291203]: 2026-01-20 19:26:24.253537132 +0000 UTC m=+0.061424783 container remove 309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Jan 20 19:26:24 compute-0 systemd[1]: libpod-conmon-309000ad506132db7c91f83a1c902b0e88339707c8cbee3484043ea2cfc4827a.scope: Deactivated successfully.
Jan 20 19:26:24 compute-0 sudo[290996]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:24 compute-0 podman[291202]: 2026-01-20 19:26:24.301306955 +0000 UTC m=+0.103125632 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 19:26:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2530923027' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:24 compute-0 ceph-mon[74381]: pgmap v1339: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/543060459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1624772812' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:26:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:24 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:26:24 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:24 compute-0 sudo[291244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:26:24 compute-0 sudo[291244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:24 compute-0 sudo[291244]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:25.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:25.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1754983278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:26:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:25 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:26:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:26:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:27.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:27.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:27.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:27 compute-0 nova_compute[254061]: 2026-01-20 19:26:27.528 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:26:27 compute-0 nova_compute[254061]: 2026-01-20 19:26:27.530 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:27 compute-0 nova_compute[254061]: 2026-01-20 19:26:27.530 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:26:27 compute-0 nova_compute[254061]: 2026-01-20 19:26:27.530 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:26:27 compute-0 nova_compute[254061]: 2026-01-20 19:26:27.531 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:26:27 compute-0 nova_compute[254061]: 2026-01-20 19:26:27.532 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:27 compute-0 ceph-mon[74381]: pgmap v1340: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:28 compute-0 ceph-mon[74381]: pgmap v1341: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:28.886Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:29.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:29.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:29] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:26:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:29] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:26:30 compute-0 nova_compute[254061]: 2026-01-20 19:26:30.206 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:26:30.304 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:26:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:26:30.304 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:26:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:26:30.305 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:26:30 compute-0 ceph-mon[74381]: pgmap v1342: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:26:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:31.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:26:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:31.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:26:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:31 compute-0 ceph-mon[74381]: pgmap v1343: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:32 compute-0 nova_compute[254061]: 2026-01-20 19:26:32.533 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:33.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:33.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:33 compute-0 nova_compute[254061]: 2026-01-20 19:26:33.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:34 compute-0 ceph-mon[74381]: pgmap v1344: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:35.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:35.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:35 compute-0 ceph-mon[74381]: pgmap v1345: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:37.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:37.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:37 compute-0 nova_compute[254061]: 2026-01-20 19:26:37.534 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:38 compute-0 ceph-mon[74381]: pgmap v1346: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:38.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:39.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:39.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:39] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:26:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:39] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:26:40 compute-0 ceph-mon[74381]: pgmap v1347: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:26:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:41.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:41.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:42 compute-0 sudo[291286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:26:42 compute-0 sudo[291286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:42 compute-0 sudo[291286]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:42 compute-0 nova_compute[254061]: 2026-01-20 19:26:42.535 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:42 compute-0 nova_compute[254061]: 2026-01-20 19:26:42.538 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:42 compute-0 ceph-mon[74381]: pgmap v1348: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:43.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:43.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:44 compute-0 ceph-mon[74381]: pgmap v1349: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:45.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:45.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:46 compute-0 podman[291315]: 2026-01-20 19:26:46.083902548 +0000 UTC m=+0.058576147 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 20 19:26:46 compute-0 ceph-mon[74381]: pgmap v1350: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:47.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:47.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:47.286Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:47 compute-0 nova_compute[254061]: 2026-01-20 19:26:47.536 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:47 compute-0 nova_compute[254061]: 2026-01-20 19:26:47.538 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:26:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372181568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:26:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:26:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3372181568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:26:48 compute-0 ceph-mon[74381]: pgmap v1351: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3372181568' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:26:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3372181568' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:26:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:48.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:26:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:48.889Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:49.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:49.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:26:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:49] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Jan 20 19:26:50 compute-0 ceph-mon[74381]: pgmap v1352: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:51.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:51.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:52 compute-0 nova_compute[254061]: 2026-01-20 19:26:52.538 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:52 compute-0 ceph-mon[74381]: pgmap v1353: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:53.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:53.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:53 compute-0 ceph-mon[74381]: pgmap v1354: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:55.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:55.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:26:55
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'default.rgw.log']
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:55 compute-0 podman[291345]: 2026-01-20 19:26:55.134871015 +0000 UTC m=+0.111387466 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 20 19:26:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:26:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:26:56 compute-0 ceph-mon[74381]: pgmap v1355: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:57.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:57.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:57.286Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:26:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:57.286Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:26:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:57.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:26:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:57 compute-0 nova_compute[254061]: 2026-01-20 19:26:57.541 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:26:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:26:58 compute-0 ceph-mon[74381]: pgmap v1356: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:26:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:26:58.890Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:26:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:26:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:26:59.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:26:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:26:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:26:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:26:59.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:26:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:26:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:26:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:26:59] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:00 compute-0 ceph-mon[74381]: pgmap v1357: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:27:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:01.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:27:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:01.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:02 compute-0 sudo[291377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:27:02 compute-0 sudo[291377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:02 compute-0 sudo[291377]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:02 compute-0 nova_compute[254061]: 2026-01-20 19:27:02.543 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:02 compute-0 ceph-mon[74381]: pgmap v1358: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:03.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:03.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:04 compute-0 ceph-mon[74381]: pgmap v1359: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:05.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:05.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:05 compute-0 ceph-mon[74381]: pgmap v1360: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:27:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:07.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:27:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:07.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:07.287Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:27:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:07.287Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:27:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:07 compute-0 nova_compute[254061]: 2026-01-20 19:27:07.544 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:08 compute-0 ceph-mon[74381]: pgmap v1361: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:08.891Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:27:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:08.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:09.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:09.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:09] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:10 compute-0 ceph-mon[74381]: pgmap v1362: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:27:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:11.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:11.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:12 compute-0 nova_compute[254061]: 2026-01-20 19:27:12.547 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:12 compute-0 nova_compute[254061]: 2026-01-20 19:27:12.549 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:12 compute-0 nova_compute[254061]: 2026-01-20 19:27:12.549 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:27:12 compute-0 nova_compute[254061]: 2026-01-20 19:27:12.549 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:27:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:12 compute-0 ceph-mon[74381]: pgmap v1363: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:12 compute-0 nova_compute[254061]: 2026-01-20 19:27:12.573 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:12 compute-0 nova_compute[254061]: 2026-01-20 19:27:12.575 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:27:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:13.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:13.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:14 compute-0 ceph-mon[74381]: pgmap v1364: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:15.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:15.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:16 compute-0 ceph-mon[74381]: pgmap v1365: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:27:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:17.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:27:17 compute-0 podman[291418]: 2026-01-20 19:27:17.108638971 +0000 UTC m=+0.079557514 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:17.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.155 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.156 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.156 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.156 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:27:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:17.288Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.574 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:27:17 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1013650557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.596 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:27:17 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1013650557' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.758 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.759 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4521MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.759 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:27:17 compute-0 nova_compute[254061]: 2026-01-20 19:27:17.759 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.003 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.003 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.133 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing inventories for resource provider cb9161e5-191d-495c-920a-01144f42a215 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.237 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating ProviderTree inventory for provider cb9161e5-191d-495c-920a-01144f42a215 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.238 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Updating inventory in ProviderTree for provider cb9161e5-191d-495c-920a-01144f42a215 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.251 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing aggregate associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.268 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Refreshing trait associations for resource provider cb9161e5-191d-495c-920a-01144f42a215, traits: COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_1_2,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSSE3,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_ABM,COMPUTE_ACCELERATORS,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NODE,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_F16C,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_EXTEND,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_STORAGE_BUS_USB,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SSE,HW_CPU_X86_SSE2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_BMI,HW_CPU_X86_SVM,HW_CPU_X86_AVX,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_NET_VIF_MODEL_VMXNET3 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.283 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:27:18 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:27:18 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355986975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:18 compute-0 ceph-mon[74381]: pgmap v1366: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.709 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
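[annotation] The subprocess round trip above is nova's RBD image backend shelling out to the Ceph CLI to refresh pool capacity (exit 0 in 0.426s). A minimal standalone sketch of the same probe, in Python since that is what both nova and cephadm run here; the command line is copied verbatim from the log, while the 'stats' key names follow the usual `ceph df --format=json` layout and should be verified against your Ceph release:

import json
import subprocess

def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
    # Same command nova_compute logged at 19:27:18.283.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return json.loads(out)

if __name__ == "__main__":
    stats = ceph_df()["stats"]  # key layout assumed; verify per release
    print("total:", stats["total_bytes"], "avail:", stats["total_avail_bytes"])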
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.713 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.726 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.727 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:27:18 compute-0 nova_compute[254061]: 2026-01-20 19:27:18.727 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
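[annotation] The resource tracker cycle that just released "compute_resources" (held 0.968s) reported the inventory logged at 19:27:18.238. Placement's effective capacity per resource class is (total - reserved) * allocation_ratio; a small sketch that recomputes it from the logged values:

# Inventory values copied verbatim from the provider_tree log line.
INVENTORY = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in INVENTORY.items():
    # Placement's capacity rule: (total - reserved) * allocation_ratio.
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2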
Jan 20 19:27:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:18.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
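[annotation] Both dashboard webhook receivers time out ("context deadline exceeded") rather than refuse the connection, so alertmanager cancels after its two retries. A hedged probe to distinguish a hanging receiver from a closed port, using only the URLs from the message; the 5-second timeout is an illustrative choice, not alertmanager's own:

import urllib.error
import urllib.request

# Receiver URLs copied from the alertmanager error line.
RECEIVERS = [
    "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
    "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
]

for url in RECEIVERS:
    req = urllib.request.Request(url, data=b"{}", method="POST")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(url, "->", resp.status)
    except urllib.error.HTTPError as exc:
        print(url, "-> HTTP", exc.code)       # reachable, app-level error
    except Exception as exc:
        print(url, "->", type(exc).__name__)  # timeout or connection refused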
Jan 20 19:27:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:19.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:19.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1355986975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:27:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
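[annotation] Prometheus 2.51.0 pulled 48454 bytes of metrics from the mgr prometheus module with HTTP 200. A sketch that fetches the same payload; the log omits the port, so 9283 (the module default) is an assumption here:

import urllib.request

# 9283 is the mgr prometheus module default; the log does not show the port.
URL = "http://192.168.122.100:9283/metrics"

with urllib.request.urlopen(URL, timeout=5) as resp:
    body = resp.read().decode()

samples = [l for l in body.splitlines() if l and not l.startswith("#")]
print(f"{len(body)} bytes, {len(samples)} samples")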
Jan 20 19:27:20 compute-0 ceph-mon[74381]: pgmap v1367: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:21.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:21.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:21 compute-0 ceph-mon[74381]: pgmap v1368: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:22 compute-0 sudo[291485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:27:22 compute-0 sudo[291485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:22 compute-0 sudo[291485]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:22 compute-0 nova_compute[254061]: 2026-01-20 19:27:22.576 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:22 compute-0 nova_compute[254061]: 2026-01-20 19:27:22.578 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:22 compute-0 nova_compute[254061]: 2026-01-20 19:27:22.728 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:22 compute-0 nova_compute[254061]: 2026-01-20 19:27:22.728 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:22 compute-0 nova_compute[254061]: 2026-01-20 19:27:22.728 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:27:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:23.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:27:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1139985577' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:23 compute-0 nova_compute[254061]: 2026-01-20 19:27:23.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:23 compute-0 nova_compute[254061]: 2026-01-20 19:27:23.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:27:23 compute-0 nova_compute[254061]: 2026-01-20 19:27:23.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:27:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:23.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:23 compute-0 nova_compute[254061]: 2026-01-20 19:27:23.154 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:27:23 compute-0 nova_compute[254061]: 2026-01-20 19:27:23.154 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:23 compute-0 nova_compute[254061]: 2026-01-20 19:27:23.155 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:24 compute-0 ceph-mon[74381]: pgmap v1369: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/599354013' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:24 compute-0 nova_compute[254061]: 2026-01-20 19:27:24.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:24 compute-0 nova_compute[254061]: 2026-01-20 19:27:24.130 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:27:24 compute-0 sudo[291512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:24 compute-0 sudo[291512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:24 compute-0 sudo[291512]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:24 compute-0 sudo[291538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:27:24 compute-0 sudo[291538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:25.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/604109035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:27:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:25.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:25 compute-0 sudo[291538]: pam_unix(sudo:session): session closed for user root
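[annotation] The sudo triple above is cephadm's per-host cadence: /bin/true as a connectivity check, `which python3`, then the staged cephadm binary running `gather-facts`, which prints host facts as JSON on stdout. A sketch that replays the logged invocation; the `hostname` and `memory_total_kb` keys match current cephadm output but are assumptions worth verifying on your release:

import json
import subprocess

# Binary path copied from the sudo log line at 19:27:24.
CEPHADM = ("/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/"
           "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

out = subprocess.run(
    ["sudo", "/bin/python3", CEPHADM, "--timeout", "895", "gather-facts"],
    check=True, capture_output=True, text=True,
).stdout

facts = json.loads(out)
print(facts.get("hostname"), facts.get("memory_total_kb"))  # keys assumed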
Jan 20 19:27:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:27:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:25 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:27:25 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:25 compute-0 sudo[291594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:25 compute-0 sudo[291594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:25 compute-0 sudo[291594]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:25 compute-0 sudo[291625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:27:25 compute-0 sudo[291625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:25 compute-0 podman[291618]: 2026-01-20 19:27:25.709776422 +0000 UTC m=+0.095190677 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.074124324 +0000 UTC m=+0.037519877 container create 5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:27:26 compute-0 systemd[1]: Started libpod-conmon-5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419.scope.
Jan 20 19:27:26 compute-0 nova_compute[254061]: 2026-01-20 19:27:26.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:26 compute-0 nova_compute[254061]: 2026-01-20 19:27:26.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 19:27:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1258719949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:27:26 compute-0 ceph-mon[74381]: pgmap v1370: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:27:26 compute-0 ceph-mon[74381]: pgmap v1371: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:27:26 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:27:26 compute-0 nova_compute[254061]: 2026-01-20 19:27:26.150 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.058945733 +0000 UTC m=+0.022341306 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.154318584 +0000 UTC m=+0.117714177 container init 5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kepler, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.161166469 +0000 UTC m=+0.124562022 container start 5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.1641497 +0000 UTC m=+0.127545263 container attach 5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kepler, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:26 compute-0 nostalgic_kepler[291726]: 167 167
Jan 20 19:27:26 compute-0 systemd[1]: libpod-5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419.scope: Deactivated successfully.
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.16674656 +0000 UTC m=+0.130142133 container died 5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kepler, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c1aad893b65a22f974bd7861c3d23e4632b87e2d5f43833a5eb78688a422fc0-merged.mount: Deactivated successfully.
Jan 20 19:27:26 compute-0 podman[291710]: 2026-01-20 19:27:26.208170132 +0000 UTC m=+0.171565685 container remove 5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_kepler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:27:26 compute-0 systemd[1]: libpod-conmon-5c3b05d59c186116ce94457c23bcd21cb310a0646541ac95e7164d689911b419.scope: Deactivated successfully.
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.397364453 +0000 UTC m=+0.042653547 container create 82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:27:26 compute-0 systemd[1]: Started libpod-conmon-82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0.scope.
Jan 20 19:27:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ef31b5fc4ed66b0c4113226b416ee58eb4d836f2b6de1b0f6144d2d86be4d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ef31b5fc4ed66b0c4113226b416ee58eb4d836f2b6de1b0f6144d2d86be4d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ef31b5fc4ed66b0c4113226b416ee58eb4d836f2b6de1b0f6144d2d86be4d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ef31b5fc4ed66b0c4113226b416ee58eb4d836f2b6de1b0f6144d2d86be4d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/81ef31b5fc4ed66b0c4113226b416ee58eb4d836f2b6de1b0f6144d2d86be4d2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.379249872 +0000 UTC m=+0.024538976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.483659058 +0000 UTC m=+0.128948172 container init 82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.499524168 +0000 UTC m=+0.144813262 container start 82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.503397362 +0000 UTC m=+0.148686456 container attach 82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_germain, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:27:26 compute-0 focused_germain[291767]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:27:26 compute-0 focused_germain[291767]: --> All data devices are unavailable
Jan 20 19:27:26 compute-0 systemd[1]: libpod-82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0.scope: Deactivated successfully.
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.857371423 +0000 UTC m=+0.502660517 container died 82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Jan 20 19:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-81ef31b5fc4ed66b0c4113226b416ee58eb4d836f2b6de1b0f6144d2d86be4d2-merged.mount: Deactivated successfully.
Jan 20 19:27:26 compute-0 podman[291750]: 2026-01-20 19:27:26.893294845 +0000 UTC m=+0.538583929 container remove 82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_germain, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:27:26 compute-0 systemd[1]: libpod-conmon-82e35b562d46e7c0ae31523c6b8b10206246d929b7106087f86347e0a9bdcff0.scope: Deactivated successfully.
Jan 20 19:27:26 compute-0 sudo[291625]: pam_unix(sudo:session): session closed for user root
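[annotation] ceph-volume was handed one LVM data device (/dev/ceph_vg0/ceph_lv0) and answered "All data devices are unavailable": the LV is already consumed by an existing OSD, as the `lvm list` output further down confirms, so the orchestrator's idempotent batch pass is a no-op rather than a failure. A non-destructive way to see the same verdict is a `--report` dry run; this sketch trims the logged command to that (dropping `--yes --no-systemd`; add the `--image` digest from the log if cephadm cannot infer one):

import subprocess

CEPHADM = ("/var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/"
           "cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36")

# Logged batch call, reduced to a read-only report; --report and
# --format json are standard ceph-volume lvm batch options.
cmd = [
    "sudo", "/bin/python3", CEPHADM,
    "ceph-volume", "--fsid", "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
    "--", "lvm", "batch", "--no-auto", "/dev/ceph_vg0/ceph_lv0",
    "--report", "--format", "json",
]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)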
Jan 20 19:27:27 compute-0 sudo[291795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:27 compute-0 sudo[291795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:27 compute-0 sudo[291795]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:27 compute-0 sudo[291821]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:27:27 compute-0 sudo[291821]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:27.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:27.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:27.289Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.477269801 +0000 UTC m=+0.045287996 container create d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:27:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:27 compute-0 systemd[1]: Started libpod-conmon-d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17.scope.
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.454276669 +0000 UTC m=+0.022294894 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.567624306 +0000 UTC m=+0.135642541 container init d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_sammet, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.574714339 +0000 UTC m=+0.142732534 container start d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_sammet, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.578489221 +0000 UTC m=+0.146507516 container attach d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_sammet, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:27 compute-0 nova_compute[254061]: 2026-01-20 19:27:27.579 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:27 compute-0 gallant_sammet[291901]: 167 167
Jan 20 19:27:27 compute-0 systemd[1]: libpod-d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17.scope: Deactivated successfully.
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.584278997 +0000 UTC m=+0.152297202 container died d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_sammet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd8c47d65ad19f254ca3ee7b8d0ba9b00d8facb2880025dc29368b47b9da2505-merged.mount: Deactivated successfully.
Jan 20 19:27:27 compute-0 podman[291885]: 2026-01-20 19:27:27.623734476 +0000 UTC m=+0.191752701 container remove d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Jan 20 19:27:27 compute-0 systemd[1]: libpod-conmon-d0c6f0d32965d7f0c1beff87a87ec42563716273127fc43f1610ff3044b48f17.scope: Deactivated successfully.
Jan 20 19:27:27 compute-0 podman[291924]: 2026-01-20 19:27:27.849579998 +0000 UTC m=+0.052577324 container create afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:27:27 compute-0 systemd[1]: Started libpod-conmon-afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b.scope.
Jan 20 19:27:27 compute-0 podman[291924]: 2026-01-20 19:27:27.828374704 +0000 UTC m=+0.031372010 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cd62e6951da0b4495eb7a2f679afa0b43d052f868d160bd9b8448678887147/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cd62e6951da0b4495eb7a2f679afa0b43d052f868d160bd9b8448678887147/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cd62e6951da0b4495eb7a2f679afa0b43d052f868d160bd9b8448678887147/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70cd62e6951da0b4495eb7a2f679afa0b43d052f868d160bd9b8448678887147/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:27 compute-0 podman[291924]: 2026-01-20 19:27:27.956017719 +0000 UTC m=+0.159015045 container init afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:27:27 compute-0 podman[291924]: 2026-01-20 19:27:27.968374604 +0000 UTC m=+0.171371930 container start afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:27:27 compute-0 podman[291924]: 2026-01-20 19:27:27.972402253 +0000 UTC m=+0.175399549 container attach afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]: {
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:     "0": [
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:         {
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "devices": [
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "/dev/loop3"
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             ],
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "lv_name": "ceph_lv0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "lv_size": "21470642176",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "name": "ceph_lv0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "tags": {
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.cluster_name": "ceph",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.crush_device_class": "",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.encrypted": "0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.osd_id": "0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.type": "block",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.vdo": "0",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:                 "ceph.with_tpm": "0"
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             },
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "type": "block",
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:             "vg_name": "ceph_vg0"
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:         }
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]:     ]
Jan 20 19:27:28 compute-0 vigorous_perlman[291940]: }
Jan 20 19:27:28 compute-0 systemd[1]: libpod-afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b.scope: Deactivated successfully.
Jan 20 19:27:28 compute-0 podman[291924]: 2026-01-20 19:27:28.316141266 +0000 UTC m=+0.519138602 container died afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-70cd62e6951da0b4495eb7a2f679afa0b43d052f868d160bd9b8448678887147-merged.mount: Deactivated successfully.
Jan 20 19:27:28 compute-0 podman[291924]: 2026-01-20 19:27:28.361473843 +0000 UTC m=+0.564471129 container remove afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_perlman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:28 compute-0 systemd[1]: libpod-conmon-afd72228412ae577f7ad5d6a15bb257dbf2ddc6bffab7369c30962742272019b.scope: Deactivated successfully.
Jan 20 19:27:28 compute-0 sudo[291821]: pam_unix(sudo:session): session closed for user root
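[annotation] The `lvm list --format json` payload printed by the vigorous_perlman container ties OSD 0 to /dev/ceph_vg0/ceph_lv0 on /dev/loop3 through its LV tags. A small parser that reduces that JSON to an osd_id -> device map, using only keys present in the output above:

import json

def osd_map(lvm_list_json: str) -> dict:
    """Map osd_id -> (lv_path, physical devices) from `lvm list` JSON."""
    table = {}
    for osd_id, lvs in json.loads(lvm_list_json).items():
        for lv in lvs:
            if lv.get("type") == "block":
                table[osd_id] = (lv["lv_path"], lv["devices"])
    return table

# With the payload above:
# {'0': ('/dev/ceph_vg0/ceph_lv0', ['/dev/loop3'])}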
Jan 20 19:27:28 compute-0 sudo[291961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:28 compute-0 sudo[291961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:28 compute-0 sudo[291961]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:28 compute-0 ceph-mon[74381]: pgmap v1372: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:28 compute-0 sudo[291986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:27:28 compute-0 sudo[291986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:28.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:28 compute-0 podman[292053]: 2026-01-20 19:27:28.941610426 +0000 UTC m=+0.036608632 container create a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 19:27:28 compute-0 systemd[1]: Started libpod-conmon-a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1.scope.
Jan 20 19:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:29 compute-0 podman[292053]: 2026-01-20 19:27:28.926471736 +0000 UTC m=+0.021469962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:27:29 compute-0 podman[292053]: 2026-01-20 19:27:29.024604842 +0000 UTC m=+0.119603068 container init a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:27:29 compute-0 podman[292053]: 2026-01-20 19:27:29.032359162 +0000 UTC m=+0.127357368 container start a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:27:29 compute-0 upbeat_thompson[292070]: 167 167
Jan 20 19:27:29 compute-0 podman[292053]: 2026-01-20 19:27:29.036587476 +0000 UTC m=+0.131585702 container attach a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:27:29 compute-0 systemd[1]: libpod-a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1.scope: Deactivated successfully.
Jan 20 19:27:29 compute-0 podman[292053]: 2026-01-20 19:27:29.039012811 +0000 UTC m=+0.134011037 container died a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 19:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ea071b5a7703d606db82f09a03572230626e4d4f4e4647c56b9b5d633542a46-merged.mount: Deactivated successfully.
Jan 20 19:27:29 compute-0 podman[292053]: 2026-01-20 19:27:29.077798021 +0000 UTC m=+0.172796227 container remove a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:27:29 compute-0 systemd[1]: libpod-conmon-a1734997aa614d7869084129f693740108bc21873f06083847cdb4a273c1c3e1.scope: Deactivated successfully.
Jan 20 19:27:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:29.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:29.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:29 compute-0 podman[292093]: 2026-01-20 19:27:29.326211015 +0000 UTC m=+0.064420465 container create d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:29 compute-0 systemd[1]: Started libpod-conmon-d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346.scope.
Jan 20 19:27:29 compute-0 podman[292093]: 2026-01-20 19:27:29.296646194 +0000 UTC m=+0.034855694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfe11ec5dbcfa8f5f23a66303d453100081c06dda2e233196e5b77ce8b85b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfe11ec5dbcfa8f5f23a66303d453100081c06dda2e233196e5b77ce8b85b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfe11ec5dbcfa8f5f23a66303d453100081c06dda2e233196e5b77ce8b85b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cfe11ec5dbcfa8f5f23a66303d453100081c06dda2e233196e5b77ce8b85b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 podman[292093]: 2026-01-20 19:27:29.431083283 +0000 UTC m=+0.169292753 container init d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_morse, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:27:29 compute-0 podman[292093]: 2026-01-20 19:27:29.443957882 +0000 UTC m=+0.182167322 container start d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:27:29 compute-0 podman[292093]: 2026-01-20 19:27:29.448029752 +0000 UTC m=+0.186239192 container attach d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_morse, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 19:27:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:30 compute-0 lvm[292184]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:27:30 compute-0 lvm[292184]: VG ceph_vg0 finished
Jan 20 19:27:30 compute-0 naughty_morse[292109]: {}
Jan 20 19:27:30 compute-0 systemd[1]: libpod-d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346.scope: Deactivated successfully.
Jan 20 19:27:30 compute-0 systemd[1]: libpod-d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346.scope: Consumed 1.310s CPU time.
Jan 20 19:27:30 compute-0 podman[292093]: 2026-01-20 19:27:30.265976821 +0000 UTC m=+1.004186251 container died d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9cfe11ec5dbcfa8f5f23a66303d453100081c06dda2e233196e5b77ce8b85b2-merged.mount: Deactivated successfully.
Jan 20 19:27:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:27:30.305 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:27:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:27:30.307 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:27:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:27:30.308 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:27:30 compute-0 podman[292093]: 2026-01-20 19:27:30.314501244 +0000 UTC m=+1.052710634 container remove d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_morse, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:27:30 compute-0 systemd[1]: libpod-conmon-d834844269f11232e9eaad9b187bb311e40f3aae16f3e57f017b25799b814346.scope: Deactivated successfully.
Jan 20 19:27:30 compute-0 sudo[291986]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:27:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:30 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:27:30 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:30 compute-0 sudo[292199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:27:30 compute-0 sudo[292199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:30 compute-0 sudo[292199]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:30 compute-0 ceph-mon[74381]: pgmap v1373: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:30 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:30 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:27:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:31.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:31 compute-0 nova_compute[254061]: 2026-01-20 19:27:31.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:31.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:32 compute-0 nova_compute[254061]: 2026-01-20 19:27:32.141 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:32 compute-0 nova_compute[254061]: 2026-01-20 19:27:32.581 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:32 compute-0 ceph-mon[74381]: pgmap v1374: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:33.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:33.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:33 compute-0 ceph-mon[74381]: pgmap v1375: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:35.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:35 compute-0 nova_compute[254061]: 2026-01-20 19:27:35.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:35 compute-0 nova_compute[254061]: 2026-01-20 19:27:35.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 19:27:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:35.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:36 compute-0 ceph-mon[74381]: pgmap v1376: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:27:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:37.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:27:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:37.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:27:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:37.292Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:37 compute-0 nova_compute[254061]: 2026-01-20 19:27:37.584 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:38 compute-0 ceph-mon[74381]: pgmap v1377: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:38.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:39.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:39.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:27:40 compute-0 ceph-mon[74381]: pgmap v1378: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:27:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:41.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:41.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:42 compute-0 sudo[292236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:27:42 compute-0 sudo[292236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:42 compute-0 sudo[292236]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:42 compute-0 nova_compute[254061]: 2026-01-20 19:27:42.586 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:42 compute-0 ceph-mon[74381]: pgmap v1379: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:43.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:43.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:44 compute-0 ceph-mon[74381]: pgmap v1380: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:27:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:45.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:27:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:45.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:46 compute-0 ceph-mon[74381]: pgmap v1381: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:27:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:47.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:27:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:47.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:47.293Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:47 compute-0 nova_compute[254061]: 2026-01-20 19:27:47.589 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:48 compute-0 podman[292267]: 2026-01-20 19:27:48.067715927 +0000 UTC m=+0.047826395 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Jan 20 19:27:48 compute-0 ceph-mon[74381]: pgmap v1382: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/959361073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:27:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/959361073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:27:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:48.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:49.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:49.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:27:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:49] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:27:50 compute-0 ceph-mon[74381]: pgmap v1383: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:51.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:51.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:52 compute-0 nova_compute[254061]: 2026-01-20 19:27:52.591 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:27:52 compute-0 ceph-mon[74381]: pgmap v1384: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:53.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:53.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:54 compute-0 ceph-mon[74381]: pgmap v1385: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:55.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:27:55
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'vms', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.nfs', '.mgr', 'cephfs.cephfs.data', 'backups']
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:55.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:27:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:27:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:27:55 compute-0 ceph-mon[74381]: pgmap v1386: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:56 compute-0 podman[292296]: 2026-01-20 19:27:56.164789164 +0000 UTC m=+0.132299172 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 20 19:27:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:57.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:57.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:57.294Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:27:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:27:57 compute-0 nova_compute[254061]: 2026-01-20 19:27:57.593 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:27:58 compute-0 ceph-mon[74381]: pgmap v1387: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:27:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:27:58.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:27:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:27:59.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:27:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:27:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:27:59.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:27:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:27:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:59] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:27:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:27:59] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:28:00 compute-0 ceph-mon[74381]: pgmap v1388: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:01.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:01.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:02 compute-0 sudo[292328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:28:02 compute-0 sudo[292328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:02 compute-0 sudo[292328]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:02 compute-0 nova_compute[254061]: 2026-01-20 19:28:02.596 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:02 compute-0 nova_compute[254061]: 2026-01-20 19:28:02.597 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:02 compute-0 nova_compute[254061]: 2026-01-20 19:28:02.597 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:28:02 compute-0 nova_compute[254061]: 2026-01-20 19:28:02.597 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:02 compute-0 nova_compute[254061]: 2026-01-20 19:28:02.598 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:02 compute-0 nova_compute[254061]: 2026-01-20 19:28:02.601 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
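
The vlog burst above traces the OVSDB IDL's reconnect state machine: after roughly five seconds without traffic on tcp:127.0.0.1:6640 it sends an inactivity probe and drops to IDLE, and the next POLLIN (the probe reply) returns it to ACTIVE. A toy sketch of that logic, assuming nothing about the real ovs.reconnect API beyond the transitions logged here:

    import time

    class ProbeFSM:
        """Toy inactivity-probe state machine modeled on the transitions in
        the log above; ovs/reconnect.py itself has more states."""

        PROBE_INTERVAL = 5.0  # seconds; the log shows ~5002 ms of idle time

        def __init__(self):
            self.state = "ACTIVE"
            self.last_activity = time.monotonic()

        def on_received(self):
            # Any traffic from the server proves liveness: back to ACTIVE.
            self.last_activity = time.monotonic()
            self.state = "ACTIVE"

        def run(self):
            # Called periodically: after PROBE_INTERVAL of silence, send a
            # probe and wait in IDLE for the echo.
            if (self.state == "ACTIVE"
                    and time.monotonic() - self.last_activity
                    > self.PROBE_INTERVAL):
                self.state = "IDLE"  # "sending inactivity probe" in the log
                return "send_probe"
            return None
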
Jan 20 19:28:02 compute-0 ceph-mon[74381]: pgmap v1389: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:03.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:03 compute-0 ceph-mon[74381]: pgmap v1390: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:05.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:05.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:06 compute-0 ceph-mon[74381]: pgmap v1391: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:07.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:07.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:07.295Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:28:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:07.296Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:07 compute-0 nova_compute[254061]: 2026-01-20 19:28:07.597 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:07 compute-0 nova_compute[254061]: 2026-01-20 19:28:07.624 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:08 compute-0 ceph-mon[74381]: pgmap v1392: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:08.898Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:28:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:08.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:28:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:09.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:09.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:09] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:28:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:09] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Jan 20 19:28:10 compute-0 ceph-mon[74381]: pgmap v1393: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:28:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:11.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:11.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:12 compute-0 ceph-mon[74381]: pgmap v1394: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:12 compute-0 nova_compute[254061]: 2026-01-20 19:28:12.626 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:12 compute-0 nova_compute[254061]: 2026-01-20 19:28:12.627 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:12 compute-0 nova_compute[254061]: 2026-01-20 19:28:12.627 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:28:12 compute-0 nova_compute[254061]: 2026-01-20 19:28:12.627 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:12 compute-0 nova_compute[254061]: 2026-01-20 19:28:12.653 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:12 compute-0 nova_compute[254061]: 2026-01-20 19:28:12.653 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:13.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:13.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:13 compute-0 nova_compute[254061]: 2026-01-20 19:28:13.479 254065 DEBUG oslo_concurrency.processutils [None req-05a8ff47-3271-4c33-be33-e784729bebcf 2adc51676c98427eab082bdf7b2efc18 811a4eb676464ca2bd20c0cc2d2f61c9 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:28:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:13 compute-0 nova_compute[254061]: 2026-01-20 19:28:13.512 254065 DEBUG oslo_concurrency.processutils [None req-05a8ff47-3271-4c33-be33-e784729bebcf 2adc51676c98427eab082bdf7b2efc18 811a4eb676464ca2bd20c0cc2d2f61c9 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:28:14 compute-0 ceph-mon[74381]: pgmap v1395: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:15.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:15.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:16 compute-0 ceph-mon[74381]: pgmap v1396: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:17.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:17.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:17.296Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:28:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:17 compute-0 nova_compute[254061]: 2026-01-20 19:28:17.655 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:17 compute-0 nova_compute[254061]: 2026-01-20 19:28:17.657 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:17 compute-0 nova_compute[254061]: 2026-01-20 19:28:17.657 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:28:17 compute-0 nova_compute[254061]: 2026-01-20 19:28:17.657 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:17 compute-0 nova_compute[254061]: 2026-01-20 19:28:17.702 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:17 compute-0 nova_compute[254061]: 2026-01-20 19:28:17.703 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:18 compute-0 ceph-mon[74381]: pgmap v1397: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:18 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:28:18.853 165659 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'be:fe:f2', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '0e:71:69:cd:a8:95'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:28:18 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:28:18.854 165659 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:28:18 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:28:18.855 165659 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=7018ca8a-de0e-4b56-bb43-675238d4f8b3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:28:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:18.899Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:18 compute-0 nova_compute[254061]: 2026-01-20 19:28:18.899 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:19 compute-0 podman[292372]: 2026-01-20 19:28:19.114656641 +0000 UTC m=+0.075553006 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:28:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:19.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.147 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.178 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.179 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.180 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.180 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.181 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:28:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:19.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:19 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:28:19 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2561476413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.667 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
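
The resource audit shells out to exactly the command logged above to size Ceph-backed storage. A minimal reproduction, assuming a reachable cluster and the same openstack keyring; the stats/total_bytes key names follow the ceph df JSON layout as commonly documented and should be verified against your release:

    import json
    import subprocess

    # Same command nova logs above.
    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, check=True, text=True,
    ).stdout

    stats = json.loads(out)["stats"]
    print(stats["total_bytes"], stats["total_avail_bytes"])
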
Jan 20 19:28:19 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2561476413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:28:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:19] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.893 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.894 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4510MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.894 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:28:19 compute-0 nova_compute[254061]: 2026-01-20 19:28:19.895 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.002 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.002 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.022 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:28:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:28:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128406728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.498 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.504 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.521 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.522 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:28:20 compute-0 nova_compute[254061]: 2026-01-20 19:28:20.523 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
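
The inventory dict nova reports to placement encodes schedulable capacity per resource class: placement treats roughly (total - reserved) * allocation_ratio as usable. Working that through with the numbers from the set_inventory_for_provider line above (the exact accounting is placement's; this is just the arithmetic):

    # Numbers from the inventory data logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, usable)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2
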
Jan 20 19:28:20 compute-0 ceph-mon[74381]: pgmap v1398: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1128406728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:21.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:21.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:21 compute-0 ceph-mon[74381]: pgmap v1399: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:22 compute-0 nova_compute[254061]: 2026-01-20 19:28:22.505 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:22 compute-0 nova_compute[254061]: 2026-01-20 19:28:22.505 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:22 compute-0 sudo[292438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:28:22 compute-0 sudo[292438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:22 compute-0 sudo[292438]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:22 compute-0 nova_compute[254061]: 2026-01-20 19:28:22.703 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:22 compute-0 nova_compute[254061]: 2026-01-20 19:28:22.705 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2528865475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:23 compute-0 nova_compute[254061]: 2026-01-20 19:28:23.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:23 compute-0 nova_compute[254061]: 2026-01-20 19:28:23.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:28:23 compute-0 nova_compute[254061]: 2026-01-20 19:28:23.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:28:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:23.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:23 compute-0 nova_compute[254061]: 2026-01-20 19:28:23.159 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:28:23 compute-0 nova_compute[254061]: 2026-01-20 19:28:23.159 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:23 compute-0 nova_compute[254061]: 2026-01-20 19:28:23.160 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:23.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:24 compute-0 ceph-mon[74381]: pgmap v1400: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/537696595' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1535218116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:28:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/779234893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:28:25 compute-0 nova_compute[254061]: 2026-01-20 19:28:25.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:25.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:26 compute-0 ceph-mon[74381]: pgmap v1401: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:26 compute-0 nova_compute[254061]: 2026-01-20 19:28:26.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:26 compute-0 nova_compute[254061]: 2026-01-20 19:28:26.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:28:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:27.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:27 compute-0 podman[292469]: 2026-01-20 19:28:27.209685492 +0000 UTC m=+0.180664521 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 20 19:28:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:27.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:27.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.675589) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937307675612, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1347, "num_deletes": 255, "total_data_size": 2464637, "memory_usage": 2505920, "flush_reason": "Manual Compaction"}
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937307689600, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 2411940, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37426, "largest_seqno": 38772, "table_properties": {"data_size": 2405581, "index_size": 3558, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13445, "raw_average_key_size": 19, "raw_value_size": 2392741, "raw_average_value_size": 3518, "num_data_blocks": 155, "num_entries": 680, "num_filter_entries": 680, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768937179, "oldest_key_time": 1768937179, "file_creation_time": 1768937307, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 14071 microseconds, and 6698 cpu microseconds.
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.689653) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 2411940 bytes OK
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.689675) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.691535) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.691555) EVENT_LOG_v1 {"time_micros": 1768937307691548, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.691575) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 2458709, prev total WAL file size 2458709, number of live WAL files 2.
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.692992) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303035' seq:72057594037927935, type:22 .. '6C6F676D0031323536' seq:0, type:0; will stop at (end)
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(2355KB)], [80(12MB)]
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937307693050, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 15860125, "oldest_snapshot_seqno": -1}
Jan 20 19:28:27 compute-0 nova_compute[254061]: 2026-01-20 19:28:27.706 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:27 compute-0 nova_compute[254061]: 2026-01-20 19:28:27.707 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:27 compute-0 nova_compute[254061]: 2026-01-20 19:28:27.708 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:28:27 compute-0 nova_compute[254061]: 2026-01-20 19:28:27.708 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:27 compute-0 nova_compute[254061]: 2026-01-20 19:28:27.708 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:27 compute-0 nova_compute[254061]: 2026-01-20 19:28:27.708 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7159 keys, 15722971 bytes, temperature: kUnknown
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937307800596, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 15722971, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15675551, "index_size": 28428, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17925, "raw_key_size": 188233, "raw_average_key_size": 26, "raw_value_size": 15546898, "raw_average_value_size": 2171, "num_data_blocks": 1116, "num_entries": 7159, "num_filter_entries": 7159, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768937307, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.800875) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 15722971 bytes
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.802165) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.4 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 12.8 +0.0 blob) out(15.0 +0.0 blob), read-write-amplify(13.1) write-amplify(6.5) OK, records in: 7683, records dropped: 524 output_compression: NoCompression
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.802182) EVENT_LOG_v1 {"time_micros": 1768937307802172, "job": 46, "event": "compaction_finished", "compaction_time_micros": 107626, "compaction_time_cpu_micros": 51003, "output_level": 6, "num_output_files": 1, "total_output_size": 15722971, "num_input_records": 7683, "num_output_records": 7159, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
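
The compaction summary's amplification figures follow directly from the logged byte counts: job 46 rewrote 2.3 MB of L0 input plus 12.8 MB of overlapping L6 data into a 15.0 MB output file. Checking the arithmetic against the logged write-amplify(6.5) and read-write-amplify(13.1):

    # MB figures from the job 46 summary above: in(2.3, 12.8), out(15.0).
    l0_in, l6_in, out = 2.3, 12.8, 15.0
    print(round(out / l0_in, 1))                    # 6.5  -> write-amplify
    print(round((l0_in + l6_in + out) / l0_in, 1))  # 13.1 -> read-write-amplify
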
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937307802656, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937307805018, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.692860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.805113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.805121) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.805124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.805127) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:28:27 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:28:27.805130) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:28:28 compute-0 ceph-mon[74381]: pgmap v1402: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:28.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:29.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:29.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:30 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:28:30 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:30] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:28:30 compute-0 ceph-mon[74381]: pgmap v1403: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:28:30.306 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:28:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:28:30.307 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:28:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:28:30.307 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:28:30 compute-0 sudo[292500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:28:30 compute-0 sudo[292500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:30 compute-0 sudo[292500]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:30 compute-0 sudo[292525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Jan 20 19:28:30 compute-0 sudo[292525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:31.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:31 compute-0 podman[292626]: 2026-01-20 19:28:31.468376219 +0000 UTC m=+0.069438750 container exec 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:28:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:31 compute-0 podman[292626]: 2026-01-20 19:28:31.57777142 +0000 UTC m=+0.178833931 container exec_died 2fba800b181b7eef43f9bbe592c3e2bd413c4a1140b3b26fe8cd839a68603da7 (image=quay.io/ceph/ceph:v19, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:28:32 compute-0 nova_compute[254061]: 2026-01-20 19:28:32.125 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:32 compute-0 podman[292762]: 2026-01-20 19:28:32.188449808 +0000 UTC m=+0.077659932 container exec d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:28:32 compute-0 podman[292762]: 2026-01-20 19:28:32.201164854 +0000 UTC m=+0.090374968 container exec_died d3ebab8fa832c2f5835ae87e49226279b06e4ab5bb811ac33df7c9afc2478663 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:28:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:32 compute-0 ceph-mon[74381]: pgmap v1404: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:32 compute-0 nova_compute[254061]: 2026-01-20 19:28:32.709 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:32 compute-0 nova_compute[254061]: 2026-01-20 19:28:32.710 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:32 compute-0 podman[292883]: 2026-01-20 19:28:32.798401648 +0000 UTC m=+0.073448579 container exec 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 19:28:32 compute-0 podman[292883]: 2026-01-20 19:28:32.811202554 +0000 UTC m=+0.086249435 container exec_died 4e8e761c58a323b2e232c17a02958108663f7cfbfd5c846bd9edf3835afc34ea (image=quay.io/ceph/haproxy:2.3, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-haproxy-nfs-cephfs-compute-0-ujqhrm)
Jan 20 19:28:33 compute-0 podman[292951]: 2026-01-20 19:28:33.148381261 +0000 UTC m=+0.074967961 container exec 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9)
Jan 20 19:28:33 compute-0 podman[292951]: 2026-01-20 19:28:33.164549649 +0000 UTC m=+0.091136309 container exec_died 03da620c845e54fd933e41497235ed884812b05ac119911f7372fce805879a03 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-keepalived-nfs-cephfs-compute-0-kuklye, vcs-type=git, vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, io.buildah.version=1.28.2, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Jan 20 19:28:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:33.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:33 compute-0 podman[293016]: 2026-01-20 19:28:33.43542265 +0000 UTC m=+0.080719226 container exec a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:28:33 compute-0 podman[293016]: 2026-01-20 19:28:33.469560943 +0000 UTC m=+0.114857509 container exec_died a8dfcd2937823e651e7ddb7da3250c4b2d68a69832f3be4617cbfedf4f0f7749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:28:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:33 compute-0 podman[293092]: 2026-01-20 19:28:33.769889923 +0000 UTC m=+0.076442761 container exec 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 19:28:33 compute-0 podman[293092]: 2026-01-20 19:28:33.943414519 +0000 UTC m=+0.249967367 container exec_died 07b235bbcf8e275aef429beeae1d676b774a5b2e004859627b1a936217e5b679 (image=quay.io/ceph/grafana:10.4.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Jan 20 19:28:34 compute-0 nova_compute[254061]: 2026-01-20 19:28:34.123 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:28:34 compute-0 podman[293203]: 2026-01-20 19:28:34.335162313 +0000 UTC m=+0.063067868 container exec 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:28:34 compute-0 podman[293203]: 2026-01-20 19:28:34.375198166 +0000 UTC m=+0.103103711 container exec_died 3b46b6d3dc84fc240ebe1bff0868890dcc11b4799d0aa0050a48fba74c90cc31 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 20 19:28:34 compute-0 sudo[292525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:28:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:34 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:28:34 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:34 compute-0 sudo[293246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:28:34 compute-0 sudo[293246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:34 compute-0 sudo[293246]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:34 compute-0 sudo[293271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:28:34 compute-0 sudo[293271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:35 compute-0 ceph-mon[74381]: pgmap v1405: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:35 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:35 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:35.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:35.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:35 compute-0 sudo[293271]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:28:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:35 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:28:35 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:35 compute-0 sudo[293330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:28:35 compute-0 sudo[293330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:35 compute-0 sudo[293330]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:35 compute-0 sudo[293355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:28:35 compute-0 sudo[293355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:35 compute-0 podman[293423]: 2026-01-20 19:28:35.972223031 +0000 UTC m=+0.040916418 container create 60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Jan 20 19:28:36 compute-0 systemd[1]: Started libpod-conmon-60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2.scope.
Jan 20 19:28:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:28:36 compute-0 podman[293423]: 2026-01-20 19:28:35.956970709 +0000 UTC m=+0.025664126 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:28:36 compute-0 podman[293423]: 2026-01-20 19:28:36.063649716 +0000 UTC m=+0.132343193 container init 60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:28:36 compute-0 podman[293423]: 2026-01-20 19:28:36.069680999 +0000 UTC m=+0.138374416 container start 60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:28:36 compute-0 podman[293423]: 2026-01-20 19:28:36.073135173 +0000 UTC m=+0.141828560 container attach 60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 19:28:36 compute-0 agitated_sammet[293439]: 167 167
Jan 20 19:28:36 compute-0 systemd[1]: libpod-60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2.scope: Deactivated successfully.
Jan 20 19:28:36 compute-0 podman[293423]: 2026-01-20 19:28:36.077224514 +0000 UTC m=+0.145917911 container died 60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8a6e0bc2d749f56eb0cd2ea88834ffb746fbb4bf0115dbcac91cff62184d8bd-merged.mount: Deactivated successfully.
Jan 20 19:28:36 compute-0 podman[293423]: 2026-01-20 19:28:36.116283981 +0000 UTC m=+0.184977378 container remove 60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_sammet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Jan 20 19:28:36 compute-0 systemd[1]: libpod-conmon-60c56cc489c0f1c6c2417ed0ebfa2b6e9ab43e3fb7252650163804d74c2722c2.scope: Deactivated successfully.
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:28:36 compute-0 ceph-mon[74381]: pgmap v1406: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:28:36 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.318221986 +0000 UTC m=+0.041923616 container create b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:28:36 compute-0 systemd[1]: Started libpod-conmon-b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5.scope.
Jan 20 19:28:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.301477733 +0000 UTC m=+0.025179383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d02d6d07648cf5ce601a9c1d7b0218f02ffff3f81b8bea60dcaf58ed61a7f2b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d02d6d07648cf5ce601a9c1d7b0218f02ffff3f81b8bea60dcaf58ed61a7f2b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d02d6d07648cf5ce601a9c1d7b0218f02ffff3f81b8bea60dcaf58ed61a7f2b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d02d6d07648cf5ce601a9c1d7b0218f02ffff3f81b8bea60dcaf58ed61a7f2b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d02d6d07648cf5ce601a9c1d7b0218f02ffff3f81b8bea60dcaf58ed61a7f2b4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.415245282 +0000 UTC m=+0.138946932 container init b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_poitras, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.424138503 +0000 UTC m=+0.147840133 container start b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.428725187 +0000 UTC m=+0.152426887 container attach b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_poitras, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:28:36 compute-0 gifted_poitras[293480]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:28:36 compute-0 gifted_poitras[293480]: --> All data devices are unavailable
Jan 20 19:28:36 compute-0 systemd[1]: libpod-b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5.scope: Deactivated successfully.
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.777526998 +0000 UTC m=+0.501228658 container died b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True)
Jan 20 19:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-d02d6d07648cf5ce601a9c1d7b0218f02ffff3f81b8bea60dcaf58ed61a7f2b4-merged.mount: Deactivated successfully.
Jan 20 19:28:36 compute-0 podman[293463]: 2026-01-20 19:28:36.820154381 +0000 UTC m=+0.543856011 container remove b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_poitras, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:28:36 compute-0 systemd[1]: libpod-conmon-b87a36b275b7445721f4f2982727e9c897f09f605c5bc2b51f304951740eebf5.scope: Deactivated successfully.
Jan 20 19:28:36 compute-0 sudo[293355]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:36 compute-0 sudo[293509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:28:36 compute-0 sudo[293509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:36 compute-0 sudo[293509]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:37 compute-0 sudo[293534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:28:37 compute-0 sudo[293534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:37.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:37.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:37.298Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:28:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:37.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:28:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.414648082 +0000 UTC m=+0.028749408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.579393562 +0000 UTC m=+0.193494808 container create ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_feistel, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:28:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:37 compute-0 systemd[1]: Started libpod-conmon-ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d.scope.
Jan 20 19:28:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.649070227 +0000 UTC m=+0.263171523 container init ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.656982861 +0000 UTC m=+0.271084107 container start ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_feistel, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:28:37 compute-0 stoic_feistel[293617]: 167 167
Jan 20 19:28:37 compute-0 systemd[1]: libpod-ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d.scope: Deactivated successfully.
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.661860484 +0000 UTC m=+0.275961750 container attach ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_feistel, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.662886431 +0000 UTC m=+0.276987697 container died ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_feistel, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:28:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b5591389a17e5ec08039017e071bdcab6b43ad84566b8f5f27963ef91e7288f-merged.mount: Deactivated successfully.
Jan 20 19:28:37 compute-0 podman[293601]: 2026-01-20 19:28:37.698024912 +0000 UTC m=+0.312126158 container remove ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2)
Jan 20 19:28:37 compute-0 systemd[1]: libpod-conmon-ce65d56aa7e1da0385249820debf2728e3130ed6d6ba4ebc72118e127897a44d.scope: Deactivated successfully.
Jan 20 19:28:37 compute-0 nova_compute[254061]: 2026-01-20 19:28:37.711 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:37 compute-0 nova_compute[254061]: 2026-01-20 19:28:37.713 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:37 compute-0 podman[293641]: 2026-01-20 19:28:37.897291436 +0000 UTC m=+0.056462240 container create 964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_chatelet, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:28:37 compute-0 systemd[1]: Started libpod-conmon-964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4.scope.
Jan 20 19:28:37 compute-0 podman[293641]: 2026-01-20 19:28:37.866827391 +0000 UTC m=+0.025998185 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:28:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2110a80d9edb8bfc3c0f76e9243cd3c2fb2aa8b80d80265d6c4a6fd24d04fe3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2110a80d9edb8bfc3c0f76e9243cd3c2fb2aa8b80d80265d6c4a6fd24d04fe3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2110a80d9edb8bfc3c0f76e9243cd3c2fb2aa8b80d80265d6c4a6fd24d04fe3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2110a80d9edb8bfc3c0f76e9243cd3c2fb2aa8b80d80265d6c4a6fd24d04fe3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:37 compute-0 podman[293641]: 2026-01-20 19:28:37.999650886 +0000 UTC m=+0.158821670 container init 964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_chatelet, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:28:38 compute-0 podman[293641]: 2026-01-20 19:28:38.006643506 +0000 UTC m=+0.165814270 container start 964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:28:38 compute-0 podman[293641]: 2026-01-20 19:28:38.010361736 +0000 UTC m=+0.169532510 container attach 964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]: {
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:     "0": [
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:         {
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "devices": [
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "/dev/loop3"
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             ],
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "lv_name": "ceph_lv0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "lv_size": "21470642176",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "name": "ceph_lv0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "tags": {
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.cluster_name": "ceph",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.crush_device_class": "",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.encrypted": "0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.osd_id": "0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.type": "block",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.vdo": "0",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:                 "ceph.with_tpm": "0"
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             },
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "type": "block",
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:             "vg_name": "ceph_vg0"
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:         }
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]:     ]
Jan 20 19:28:38 compute-0 blissful_chatelet[293658]: }
Jan 20 19:28:38 compute-0 systemd[1]: libpod-964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4.scope: Deactivated successfully.
Jan 20 19:28:38 compute-0 conmon[293658]: conmon 964aec16dd3f772a72c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4.scope/container/memory.events
Jan 20 19:28:38 compute-0 podman[293667]: 2026-01-20 19:28:38.371463849 +0000 UTC m=+0.041410581 container died 964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:28:38 compute-0 ceph-mon[74381]: pgmap v1407: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2110a80d9edb8bfc3c0f76e9243cd3c2fb2aa8b80d80265d6c4a6fd24d04fe3-merged.mount: Deactivated successfully.
Jan 20 19:28:38 compute-0 podman[293667]: 2026-01-20 19:28:38.415994835 +0000 UTC m=+0.085941597 container remove 964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:28:38 compute-0 systemd[1]: libpod-conmon-964aec16dd3f772a72c21e77df72477702f2b94993c1e2a25c34efd2903c45f4.scope: Deactivated successfully.
Jan 20 19:28:38 compute-0 sudo[293534]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:38 compute-0 sudo[293682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:28:38 compute-0 sudo[293682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:38 compute-0 sudo[293682]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:38 compute-0 sudo[293707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:28:38 compute-0 sudo[293707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:38.902Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:28:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:38.902Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:39.009386246 +0000 UTC m=+0.034181916 container create fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:28:39 compute-0 systemd[1]: Started libpod-conmon-fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83.scope.
Jan 20 19:28:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:39.086556444 +0000 UTC m=+0.111352134 container init fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:38.995314775 +0000 UTC m=+0.020110465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:39.097958043 +0000 UTC m=+0.122753713 container start fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:39.100791949 +0000 UTC m=+0.125587659 container attach fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_yonath, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:28:39 compute-0 heuristic_yonath[293793]: 167 167
Jan 20 19:28:39 compute-0 systemd[1]: libpod-fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83.scope: Deactivated successfully.
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:39.102349932 +0000 UTC m=+0.127145612 container died fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_yonath, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Jan 20 19:28:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-43fee22b657172e306024328f267ba8e64e4a5b095c280836813d828fd21b6a4-merged.mount: Deactivated successfully.
Jan 20 19:28:39 compute-0 podman[293776]: 2026-01-20 19:28:39.132080607 +0000 UTC m=+0.156876277 container remove fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:28:39 compute-0 systemd[1]: libpod-conmon-fa86b01b99ea8621730ae3efd9c879996370f836dccba8337816613e94573d83.scope: Deactivated successfully.
Jan 20 19:28:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:39.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:39.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1408: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:39 compute-0 podman[293816]: 2026-01-20 19:28:39.335234566 +0000 UTC m=+0.048347691 container create 1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid)
Jan 20 19:28:39 compute-0 systemd[1]: Started libpod-conmon-1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142.scope.
Jan 20 19:28:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d876374a2fa81108877d2bcba920214decb556180f30a1c3789443133ef501/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d876374a2fa81108877d2bcba920214decb556180f30a1c3789443133ef501/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d876374a2fa81108877d2bcba920214decb556180f30a1c3789443133ef501/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d876374a2fa81108877d2bcba920214decb556180f30a1c3789443133ef501/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:28:39 compute-0 podman[293816]: 2026-01-20 19:28:39.310959118 +0000 UTC m=+0.024072263 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:28:39 compute-0 podman[293816]: 2026-01-20 19:28:39.419552707 +0000 UTC m=+0.132665912 container init 1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:28:39 compute-0 podman[293816]: 2026-01-20 19:28:39.431136531 +0000 UTC m=+0.144249666 container start 1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:28:39 compute-0 podman[293816]: 2026-01-20 19:28:39.434522583 +0000 UTC m=+0.147635728 container attach 1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:28:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:28:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:28:40 compute-0 lvm[293907]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:28:40 compute-0 lvm[293907]: VG ceph_vg0 finished
Jan 20 19:28:40 compute-0 elegant_hamilton[293833]: {}
Jan 20 19:28:40 compute-0 systemd[1]: libpod-1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142.scope: Deactivated successfully.
Jan 20 19:28:40 compute-0 systemd[1]: libpod-1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142.scope: Consumed 1.119s CPU time.
Jan 20 19:28:40 compute-0 podman[293816]: 2026-01-20 19:28:40.142486774 +0000 UTC m=+0.855599909 container died 1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:28:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-34d876374a2fa81108877d2bcba920214decb556180f30a1c3789443133ef501-merged.mount: Deactivated successfully.
Jan 20 19:28:40 compute-0 podman[293816]: 2026-01-20 19:28:40.186161576 +0000 UTC m=+0.899274701 container remove 1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_hamilton, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:28:40 compute-0 systemd[1]: libpod-conmon-1a954abdbcb0ab5e8cd813588e44826fda1a59074823fe77e483d1cfefa5a142.scope: Deactivated successfully.
Jan 20 19:28:40 compute-0 sudo[293707]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:28:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:28:40 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:40 compute-0 sudo[293921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:28:40 compute-0 sudo[293921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:40 compute-0 sudo[293921]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:40 compute-0 ceph-mon[74381]: pgmap v1408: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:28:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:40 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:28:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:41.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:41.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1409: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:42 compute-0 sudo[293949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:28:42 compute-0 sudo[293949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:28:42 compute-0 sudo[293949]: pam_unix(sudo:session): session closed for user root
Jan 20 19:28:42 compute-0 nova_compute[254061]: 2026-01-20 19:28:42.713 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:42 compute-0 nova_compute[254061]: 2026-01-20 19:28:42.714 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:42 compute-0 nova_compute[254061]: 2026-01-20 19:28:42.715 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5001 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:28:42 compute-0 nova_compute[254061]: 2026-01-20 19:28:42.715 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:42 compute-0 nova_compute[254061]: 2026-01-20 19:28:42.715 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:42 compute-0 nova_compute[254061]: 2026-01-20 19:28:42.716 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:42 compute-0 ceph-mon[74381]: pgmap v1409: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:43.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:43.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1410: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:44 compute-0 ceph-mon[74381]: pgmap v1410: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:45.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:45.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1411: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:46 compute-0 ceph-mon[74381]: pgmap v1411: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Jan 20 19:28:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:47.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:47.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:47.300Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1412: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:47 compute-0 nova_compute[254061]: 2026-01-20 19:28:47.717 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:28:48 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 8529 writes, 38K keys, 8527 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 8529 writes, 8527 syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1504 writes, 7168 keys, 1504 commit groups, 1.0 writes per commit group, ingest: 11.38 MB, 0.02 MB/s
                                           Interval WAL: 1504 writes, 1504 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    108.6      0.57              0.19        23    0.025       0      0       0.0       0.0
                                             L6      1/0   14.99 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.9    143.2    123.9      2.42              0.81        22    0.110    135K    12K       0.0       0.0
                                            Sum      1/0   14.99 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.9    116.0    121.0      2.98              1.00        45    0.066    135K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.9    139.1    142.7      0.69              0.28        12    0.058     44K   3621       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    143.2    123.9      2.42              0.81        22    0.110    135K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    109.4      0.56              0.19        22    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     12.2      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.060, interval 0.014
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.35 GB write, 0.12 MB/s write, 0.34 GB read, 0.12 MB/s read, 3.0 seconds
                                           Interval compaction: 0.10 GB write, 0.17 MB/s write, 0.09 GB read, 0.16 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x564b95c0c9b0#2 capacity: 304.00 MB usage: 29.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000216 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1836,28.69 MB,9.43624%) FilterBlock(46,391.61 KB,0.1258%) IndexBlock(46,668.05 KB,0.214602%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:28:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:28:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442743525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:28:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:28:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2442743525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:28:48 compute-0 ceph-mon[74381]: pgmap v1412: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:48.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:49.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:49.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1413: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2442743525' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:28:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/2442743525' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:28:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:49] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Jan 20 19:28:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:49] "GET /metrics HTTP/1.1" 200 48450 "" "Prometheus/2.51.0"
Jan 20 19:28:50 compute-0 podman[293982]: 2026-01-20 19:28:50.098790055 +0000 UTC m=+0.074055515 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 20 19:28:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:51.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:51 compute-0 ceph-mon[74381]: pgmap v1413: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:51.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1414: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:52 compute-0 ceph-mon[74381]: pgmap v1414: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:52 compute-0 nova_compute[254061]: 2026-01-20 19:28:52.719 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:52 compute-0 nova_compute[254061]: 2026-01-20 19:28:52.720 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:53.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:53.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1415: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:54 compute-0 ceph-mon[74381]: pgmap v1415: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:28:55
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'images', '.mgr', '.nfs', 'backups']
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:28:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:55.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:55.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1416: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:28:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:28:56 compute-0 ceph-mon[74381]: pgmap v1416: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:57.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:57.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:28:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:57.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:28:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1417: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:28:57 compute-0 nova_compute[254061]: 2026-01-20 19:28:57.722 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:28:57 compute-0 nova_compute[254061]: 2026-01-20 19:28:57.724 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:57 compute-0 nova_compute[254061]: 2026-01-20 19:28:57.724 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:28:57 compute-0 nova_compute[254061]: 2026-01-20 19:28:57.724 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:57 compute-0 nova_compute[254061]: 2026-01-20 19:28:57.725 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:28:57 compute-0 nova_compute[254061]: 2026-01-20 19:28:57.727 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:28:58 compute-0 podman[294010]: 2026-01-20 19:28:58.112491295 +0000 UTC m=+0.087231773 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:28:58 compute-0 ceph-mon[74381]: pgmap v1417: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:28:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:28:58.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:28:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:28:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:28:59.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:28:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:28:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:28:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:28:59.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:28:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1418: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:28:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:59] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:28:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:28:59] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:29:00 compute-0 ceph-mon[74381]: pgmap v1418: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:01.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:29:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:01.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:29:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1419: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:02 compute-0 ceph-mon[74381]: pgmap v1419: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:02 compute-0 nova_compute[254061]: 2026-01-20 19:29:02.723 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:02 compute-0 nova_compute[254061]: 2026-01-20 19:29:02.727 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:02 compute-0 sudo[294040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:29:02 compute-0 sudo[294040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:02 compute-0 sudo[294040]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:03.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:03.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1420: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:04 compute-0 ceph-mon[74381]: pgmap v1420: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:05.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:05.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1421: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:06 compute-0 ceph-mon[74381]: pgmap v1421: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:07.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:07.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:07.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1422: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:07 compute-0 nova_compute[254061]: 2026-01-20 19:29:07.726 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:07 compute-0 nova_compute[254061]: 2026-01-20 19:29:07.728 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:08 compute-0 ceph-mon[74381]: pgmap v1422: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:08.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:09.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:09.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1423: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:09] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:29:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:09] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:29:10 compute-0 ceph-mon[74381]: pgmap v1423: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:29:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:11.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:11.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1424: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:11 compute-0 ceph-mon[74381]: pgmap v1424: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:12 compute-0 nova_compute[254061]: 2026-01-20 19:29:12.729 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:12 compute-0 nova_compute[254061]: 2026-01-20 19:29:12.731 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:12 compute-0 nova_compute[254061]: 2026-01-20 19:29:12.731 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:12 compute-0 nova_compute[254061]: 2026-01-20 19:29:12.731 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:12 compute-0 nova_compute[254061]: 2026-01-20 19:29:12.772 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:12 compute-0 nova_compute[254061]: 2026-01-20 19:29:12.773 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:13.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:13.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1425: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:14 compute-0 ceph-mon[74381]: pgmap v1425: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:15.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1426: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:15.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:16 compute-0 ceph-mon[74381]: pgmap v1426: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:29:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:17.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:29:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:17.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1427: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:17.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:17 compute-0 nova_compute[254061]: 2026-01-20 19:29:17.773 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:17 compute-0 nova_compute[254061]: 2026-01-20 19:29:17.775 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:17 compute-0 nova_compute[254061]: 2026-01-20 19:29:17.775 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:17 compute-0 nova_compute[254061]: 2026-01-20 19:29:17.775 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:17 compute-0 nova_compute[254061]: 2026-01-20 19:29:17.819 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:17 compute-0 nova_compute[254061]: 2026-01-20 19:29:17.820 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:18 compute-0 ceph-mon[74381]: pgmap v1427: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:18.907Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.002000053s ======
Jan 20 19:29:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:19.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000053s
Jan 20 19:29:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1428: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:19.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:19] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:29:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:19] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.160 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.160 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.161 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.161 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.162 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:29:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:29:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2573063282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.639 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:29:20 compute-0 ceph-mon[74381]: pgmap v1428: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:20 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2573063282' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.881 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.883 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4522MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.883 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.884 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.971 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.972 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:29:20 compute-0 nova_compute[254061]: 2026-01-20 19:29:20.987 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:29:21 compute-0 podman[294108]: 2026-01-20 19:29:21.086464394 +0000 UTC m=+0.058977177 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:29:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1429: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:21.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:29:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2346546179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:21 compute-0 nova_compute[254061]: 2026-01-20 19:29:21.493 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:29:21 compute-0 nova_compute[254061]: 2026-01-20 19:29:21.500 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:29:21 compute-0 nova_compute[254061]: 2026-01-20 19:29:21.526 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:29:21 compute-0 nova_compute[254061]: 2026-01-20 19:29:21.530 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:29:21 compute-0 nova_compute[254061]: 2026-01-20 19:29:21.531 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:29:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2346546179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:22 compute-0 nova_compute[254061]: 2026-01-20 19:29:22.820 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:22 compute-0 ceph-mon[74381]: pgmap v1429: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:22 compute-0 sudo[294150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:29:22 compute-0 sudo[294150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:22 compute-0 sudo[294150]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:23.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1430: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:23.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:23 compute-0 nova_compute[254061]: 2026-01-20 19:29:23.532 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:23 compute-0 nova_compute[254061]: 2026-01-20 19:29:23.532 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1869425389' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:23 compute-0 ceph-mon[74381]: pgmap v1430: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:24 compute-0 nova_compute[254061]: 2026-01-20 19:29:24.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:24 compute-0 nova_compute[254061]: 2026-01-20 19:29:24.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:29:24 compute-0 nova_compute[254061]: 2026-01-20 19:29:24.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:29:24 compute-0 nova_compute[254061]: 2026-01-20 19:29:24.185 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:29:24 compute-0 nova_compute[254061]: 2026-01-20 19:29:24.186 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/562574302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:29:25 compute-0 nova_compute[254061]: 2026-01-20 19:29:25.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:25 compute-0 nova_compute[254061]: 2026-01-20 19:29:25.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:29:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:25.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1431: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:25.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:29:25 compute-0 ceph-mon[74381]: pgmap v1431: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:26 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3766955893' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:29:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:27.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:27.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1432: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:29:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:27.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:29:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.824 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.825 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.825 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.826 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.850 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:27 compute-0 nova_compute[254061]: 2026-01-20 19:29:27.850 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:27 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/990714847' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:29:27 compute-0 ceph-mon[74381]: pgmap v1432: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:28.908Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:29:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:28.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:29 compute-0 podman[294182]: 2026-01-20 19:29:29.167831087 +0000 UTC m=+0.137498752 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:29:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:29.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1433: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:29.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:29] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:29:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:29] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:29:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:29:30.308 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:29:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:29:30.308 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:29:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:29:30.308 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:29:30 compute-0 ceph-mon[74381]: pgmap v1433: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:31.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1434: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:31.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:31 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-crash-compute-0[79794]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Jan 20 19:29:32 compute-0 ceph-mon[74381]: pgmap v1434: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:32 compute-0 nova_compute[254061]: 2026-01-20 19:29:32.851 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:32 compute-0 nova_compute[254061]: 2026-01-20 19:29:32.853 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:32 compute-0 nova_compute[254061]: 2026-01-20 19:29:32.853 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:32 compute-0 nova_compute[254061]: 2026-01-20 19:29:32.853 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:32 compute-0 nova_compute[254061]: 2026-01-20 19:29:32.854 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:32 compute-0 nova_compute[254061]: 2026-01-20 19:29:32.856 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:33 compute-0 nova_compute[254061]: 2026-01-20 19:29:33.125 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:29:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.003000079s ======
Jan 20 19:29:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:33.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000079s
Jan 20 19:29:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1435: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:33.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:34 compute-0 ceph-mon[74381]: pgmap v1435: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:35.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1436: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:29:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:35.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:29:36 compute-0 ceph-mon[74381]: pgmap v1436: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:37.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:37.306Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1437: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:37.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:37 compute-0 nova_compute[254061]: 2026-01-20 19:29:37.855 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:37 compute-0 nova_compute[254061]: 2026-01-20 19:29:37.856 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:38 compute-0 ceph-mon[74381]: pgmap v1437: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:38.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
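The two dispatcher errors above show Alertmanager's ceph-dashboard webhook targets (compute-1 and compute-2, port 8443, path /api/prometheus_receiver) timing out with "context deadline exceeded" after two attempts. A minimal stand-in receiver for connectivity testing, stdlib only and plain HTTP; the real dashboard endpoint on 8443 presumably terminates TLS, so treat this as a reachability probe rather than the dashboard API:

```python
# Minimal stand-in for the /api/prometheus_receiver endpoint, useful for
# checking whether Alertmanager can reach a host:port at all. Plain HTTP
# only; the real ceph-dashboard receiver on 8443 is assumed to speak TLS,
# so this is a diagnostic sketch, not a drop-in replacement.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/prometheus_receiver":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or b"{}")
        # Alertmanager webhook payloads carry an "alerts" list.
        print("received", len(payload.get("alerts", [])), "alert(s)")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()
```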
Jan 20 19:29:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:39.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1438: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:39.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:39] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:29:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:39] "GET /metrics HTTP/1.1" 200 48451 "" "Prometheus/2.51.0"
Jan 20 19:29:40 compute-0 ceph-mon[74381]: pgmap v1438: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:29:40 compute-0 sudo[294219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:29:40 compute-0 sudo[294219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:40 compute-0 sudo[294219]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:40 compute-0 sudo[294244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:29:40 compute-0 sudo[294244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:41 compute-0 sudo[294244]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:41.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1439: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1440: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:29:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:41.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:41 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:29:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:29:41 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:29:42 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:42 compute-0 sudo[294303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:29:42 compute-0 sudo[294303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:42 compute-0 sudo[294303]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:42 compute-0 sudo[294328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:29:42 compute-0 sudo[294328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:42 compute-0 podman[294395]: 2026-01-20 19:29:42.577144546 +0000 UTC m=+0.038581236 container create acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hugle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Jan 20 19:29:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:42 compute-0 systemd[1]: Started libpod-conmon-acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4.scope.
Jan 20 19:29:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:29:42 compute-0 podman[294395]: 2026-01-20 19:29:42.560472144 +0000 UTC m=+0.021908844 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:29:42 compute-0 podman[294395]: 2026-01-20 19:29:42.659234738 +0000 UTC m=+0.120671418 container init acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:29:42 compute-0 podman[294395]: 2026-01-20 19:29:42.668154469 +0000 UTC m=+0.129591169 container start acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:29:42 compute-0 podman[294395]: 2026-01-20 19:29:42.671633983 +0000 UTC m=+0.133070723 container attach acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hugle, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:29:42 compute-0 crazy_hugle[294412]: 167 167
Jan 20 19:29:42 compute-0 systemd[1]: libpod-acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4.scope: Deactivated successfully.
Jan 20 19:29:42 compute-0 podman[294417]: 2026-01-20 19:29:42.73211063 +0000 UTC m=+0.040134977 container died acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-98f6143852b6629f5db9f41b3a6ff5bd7ad8a8f4f65c98a1b73bf78f7871e3df-merged.mount: Deactivated successfully.
Jan 20 19:29:42 compute-0 podman[294417]: 2026-01-20 19:29:42.767769065 +0000 UTC m=+0.075793392 container remove acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=crazy_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:29:42 compute-0 systemd[1]: libpod-conmon-acc9e53e6947ee2f9b0da33338cbd691e431472c74b012fc1d4a6ef0c014d0e4.scope: Deactivated successfully.
Jan 20 19:29:42 compute-0 ceph-mon[74381]: pgmap v1439: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:42 compute-0 ceph-mon[74381]: pgmap v1440: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:42 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:42 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:42 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:29:42 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:29:42 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:29:42 compute-0 nova_compute[254061]: 2026-01-20 19:29:42.857 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:42 compute-0 nova_compute[254061]: 2026-01-20 19:29:42.860 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:42 compute-0 nova_compute[254061]: 2026-01-20 19:29:42.860 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:42 compute-0 nova_compute[254061]: 2026-01-20 19:29:42.861 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:42 compute-0 nova_compute[254061]: 2026-01-20 19:29:42.887 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:42 compute-0 nova_compute[254061]: 2026-01-20 19:29:42.888 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
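Taken together, the nova_compute lines at 19:29:42 show one full inactivity-probe cycle against the local ovsdb-server on tcp:127.0.0.1:6640: roughly 5000 ms of silence, a probe is sent and the connection enters IDLE, the reply arrives as a POLLIN wakeup, and the state returns to ACTIVE. A sketch of that cycle, with names and the 5 s interval assumed for illustration only; the actual state machine is the one in ovs/reconnect.py referenced in the log:

```python
import time

# Illustrative sketch of the inactivity-probe cycle visible above
# (ACTIVE -> probe sent -> IDLE -> reply -> ACTIVE). The class, method
# names, and intervals are assumptions for illustration; the real logic
# lives in ovs/reconnect.py, which nova_compute is exercising here.
PROBE_INTERVAL = 5.0  # seconds of silence before probing

class ProbeFSM:
    def __init__(self):
        self.state = "ACTIVE"
        self.last_rx = time.monotonic()

    def on_receive(self):
        # Any inbound traffic (the POLLIN wakeups) refreshes the timer
        # and, if we were waiting on a probe reply, restores ACTIVE.
        self.last_rx = time.monotonic()
        self.state = "ACTIVE"

    def run(self, send_probe, disconnect):
        idle = time.monotonic() - self.last_rx
        if self.state == "ACTIVE" and idle >= PROBE_INTERVAL:
            send_probe()          # "sending inactivity probe"
            self.state = "IDLE"   # "entering IDLE"
        elif self.state == "IDLE" and idle >= 2 * PROBE_INTERVAL:
            disconnect()          # no reply in time: force a reconnect
```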
Jan 20 19:29:42 compute-0 sudo[294435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:29:42 compute-0 sudo[294435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:42 compute-0 sudo[294435]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.022919981 +0000 UTC m=+0.062367379 container create e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Jan 20 19:29:43 compute-0 systemd[1]: Started libpod-conmon-e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f.scope.
Jan 20 19:29:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4440f73e55c87a1ca1d36fda7fd32af9349aa8d360e07af1473e401397fad641/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4440f73e55c87a1ca1d36fda7fd32af9349aa8d360e07af1473e401397fad641/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4440f73e55c87a1ca1d36fda7fd32af9349aa8d360e07af1473e401397fad641/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4440f73e55c87a1ca1d36fda7fd32af9349aa8d360e07af1473e401397fad641/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4440f73e55c87a1ca1d36fda7fd32af9349aa8d360e07af1473e401397fad641/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
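The 0x7fffffff in the xfs warnings above is the largest signed 32-bit time_t, which is exactly where the "supports timestamps until 2038" limit comes from; a one-line check confirms the date:

```python
from datetime import datetime, timezone

# 0x7fffffff = 2147483647, the largest signed 32-bit time_t.
print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00, hence "timestamps until 2038"
```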
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.000086123 +0000 UTC m=+0.039533551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.112352692 +0000 UTC m=+0.151800080 container init e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hugle, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.123977236 +0000 UTC m=+0.163424604 container start e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.126847684 +0000 UTC m=+0.166295052 container attach e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:29:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:43.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1441: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:43 compute-0 priceless_hugle[294482]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:29:43 compute-0 priceless_hugle[294482]: --> All data devices are unavailable
Jan 20 19:29:43 compute-0 systemd[1]: libpod-e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f.scope: Deactivated successfully.
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.464482113 +0000 UTC m=+0.503929501 container died e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:29:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-4440f73e55c87a1ca1d36fda7fd32af9349aa8d360e07af1473e401397fad641-merged.mount: Deactivated successfully.
Jan 20 19:29:43 compute-0 podman[294464]: 2026-01-20 19:29:43.526587624 +0000 UTC m=+0.566035012 container remove e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_hugle, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Jan 20 19:29:43 compute-0 systemd[1]: libpod-conmon-e9f8dccef482f0b000c05859d7eacb9953e314ce345db2a57ccca5cab618542f.scope: Deactivated successfully.
Jan 20 19:29:43 compute-0 sudo[294328]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:43 compute-0 sudo[294513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:29:43 compute-0 sudo[294513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:43 compute-0 sudo[294513]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:43 compute-0 sudo[294538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:29:43 compute-0 sudo[294538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.102142282 +0000 UTC m=+0.041615288 container create 4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:29:44 compute-0 systemd[1]: Started libpod-conmon-4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4.scope.
Jan 20 19:29:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.084159845 +0000 UTC m=+0.023632851 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.178234231 +0000 UTC m=+0.117707257 container init 4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.184353847 +0000 UTC m=+0.123826843 container start 4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:29:44 compute-0 sad_sammet[294623]: 167 167
Jan 20 19:29:44 compute-0 systemd[1]: libpod-4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4.scope: Deactivated successfully.
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.189430114 +0000 UTC m=+0.128903150 container attach 4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_sammet, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.189955369 +0000 UTC m=+0.129428365 container died 4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_sammet, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0dc1742ad580f60174e2b8defea542f1fe0389968d83a0ac1167e7ff711e737-merged.mount: Deactivated successfully.
Jan 20 19:29:44 compute-0 podman[294606]: 2026-01-20 19:29:44.233540268 +0000 UTC m=+0.173013264 container remove 4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Jan 20 19:29:44 compute-0 systemd[1]: libpod-conmon-4113f9b9b1da6f9369c63828ee4d14e16153786dc30092bc30dd00c989070cb4.scope: Deactivated successfully.
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.41388068 +0000 UTC m=+0.050321653 container create ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_shamir, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:29:44 compute-0 systemd[1]: Started libpod-conmon-ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664.scope.
Jan 20 19:29:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f562d339fb3875a5b8314662833d0007d1af568913aa5599e7eab810b12735/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.389759426 +0000 UTC m=+0.026200489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f562d339fb3875a5b8314662833d0007d1af568913aa5599e7eab810b12735/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f562d339fb3875a5b8314662833d0007d1af568913aa5599e7eab810b12735/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f562d339fb3875a5b8314662833d0007d1af568913aa5599e7eab810b12735/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.504110941 +0000 UTC m=+0.140551934 container init ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_shamir, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.510381612 +0000 UTC m=+0.146822625 container start ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.514111562 +0000 UTC m=+0.150552555 container attach ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_shamir, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]: {
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:     "0": [
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:         {
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "devices": [
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "/dev/loop3"
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             ],
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "lv_name": "ceph_lv0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "lv_size": "21470642176",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "name": "ceph_lv0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "tags": {
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.cluster_name": "ceph",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.crush_device_class": "",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.encrypted": "0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.osd_id": "0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.type": "block",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.vdo": "0",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:                 "ceph.with_tpm": "0"
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             },
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "type": "block",
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:             "vg_name": "ceph_vg0"
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:         }
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]:     ]
Jan 20 19:29:44 compute-0 optimistic_shamir[294662]: }
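The JSON block above is the output of the `ceph-volume ... lvm list --format json` run started at 19:29:43: OSD 0 is already prepared on /dev/ceph_vg0/ceph_lv0 (backed by /dev/loop3), which also explains why the preceding `lvm batch` pass reported "All data devices are unavailable". A short sketch that extracts the relevant fields, assuming the structure shown; the helper name is hypothetical:

```python
import json

# Parse the `ceph-volume lvm list --format json` output shown above and
# report which OSDs are already prepared. Structure assumed to match the
# log: a dict keyed by OSD id, each value a list of LV records with tags.
def prepared_osds(lvm_list_json: str):
    for osd_id, lvs in json.loads(lvm_list_json).items():
        for lv in lvs:
            tags = lv.get("tags", {})
            yield {
                "osd_id": osd_id,
                "osd_fsid": tags.get("ceph.osd_fsid"),
                "lv_path": lv.get("lv_path"),
                "devices": lv.get("devices", []),
            }

# e.g. osd 0 -> fsid 5f53c0c6-... on /dev/ceph_vg0/ceph_lv0 (/dev/loop3),
# which is why the earlier `lvm batch` pass had no free data devices.
```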
Jan 20 19:29:44 compute-0 ceph-mon[74381]: pgmap v1441: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:44 compute-0 systemd[1]: libpod-ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664.scope: Deactivated successfully.
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.820916116 +0000 UTC m=+0.457357109 container died ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:29:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f562d339fb3875a5b8314662833d0007d1af568913aa5599e7eab810b12735-merged.mount: Deactivated successfully.
Jan 20 19:29:44 compute-0 podman[294645]: 2026-01-20 19:29:44.872901204 +0000 UTC m=+0.509342187 container remove ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_shamir, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Jan 20 19:29:44 compute-0 systemd[1]: libpod-conmon-ebd2492ffeab13e932ce592483bcb3ced411be562f2a1856f321db1d85b8b664.scope: Deactivated successfully.
Jan 20 19:29:44 compute-0 sudo[294538]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:44 compute-0 sudo[294682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:29:44 compute-0 sudo[294682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:44 compute-0 sudo[294682]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:45 compute-0 sudo[294708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:29:45 compute-0 sudo[294708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:29:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:45.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:29:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1442: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:45.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.412103568 +0000 UTC m=+0.042926263 container create 2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:29:45 compute-0 systemd[1]: Started libpod-conmon-2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99.scope.
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.390476342 +0000 UTC m=+0.021299047 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:29:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.498983779 +0000 UTC m=+0.129806484 container init 2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_northcutt, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.505070944 +0000 UTC m=+0.135893639 container start 2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_northcutt, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:29:45 compute-0 charming_northcutt[294789]: 167 167
Jan 20 19:29:45 compute-0 systemd[1]: libpod-2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99.scope: Deactivated successfully.
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.509019391 +0000 UTC m=+0.139842066 container attach 2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_northcutt, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.509314808 +0000 UTC m=+0.140137473 container died 2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:29:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b9332e678518de74f97a617b2a04063dcc545d824b249c875cbb114116c0796-merged.mount: Deactivated successfully.
Jan 20 19:29:45 compute-0 podman[294773]: 2026-01-20 19:29:45.546229187 +0000 UTC m=+0.177051852 container remove 2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_northcutt, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:29:45 compute-0 systemd[1]: libpod-conmon-2a9498fa7bba14c086144fa684433dfd2629aedea9c2007d07cd83411820da99.scope: Deactivated successfully.
Jan 20 19:29:45 compute-0 podman[294812]: 2026-01-20 19:29:45.705034626 +0000 UTC m=+0.038630167 container create 2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:29:45 compute-0 systemd[1]: Started libpod-conmon-2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100.scope.
Jan 20 19:29:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9365f076231831b479758a5fa5df063305329fd1c56847cd4833ebe85bec97a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9365f076231831b479758a5fa5df063305329fd1c56847cd4833ebe85bec97a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9365f076231831b479758a5fa5df063305329fd1c56847cd4833ebe85bec97a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9365f076231831b479758a5fa5df063305329fd1c56847cd4833ebe85bec97a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:29:45 compute-0 podman[294812]: 2026-01-20 19:29:45.775605586 +0000 UTC m=+0.109201107 container init 2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_meninsky, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:29:45 compute-0 podman[294812]: 2026-01-20 19:29:45.686636348 +0000 UTC m=+0.020231879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:29:45 compute-0 podman[294812]: 2026-01-20 19:29:45.783005237 +0000 UTC m=+0.116600748 container start 2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Jan 20 19:29:45 compute-0 podman[294812]: 2026-01-20 19:29:45.785973237 +0000 UTC m=+0.119568738 container attach 2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_meninsky, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:29:46 compute-0 lvm[294903]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:29:46 compute-0 lvm[294903]: VG ceph_vg0 finished
Jan 20 19:29:46 compute-0 beautiful_meninsky[294829]: {}
Jan 20 19:29:46 compute-0 systemd[1]: libpod-2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100.scope: Deactivated successfully.
Jan 20 19:29:46 compute-0 podman[294812]: 2026-01-20 19:29:46.475063638 +0000 UTC m=+0.808659129 container died 2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_meninsky, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:29:46 compute-0 systemd[1]: libpod-2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100.scope: Consumed 1.067s CPU time.
Jan 20 19:29:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9365f076231831b479758a5fa5df063305329fd1c56847cd4833ebe85bec97a-merged.mount: Deactivated successfully.
Jan 20 19:29:46 compute-0 podman[294812]: 2026-01-20 19:29:46.51913115 +0000 UTC m=+0.852726651 container remove 2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_meninsky, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:29:46 compute-0 systemd[1]: libpod-conmon-2cd3445f037c20c34c47eb662cfa51c68e2cc82144c87529e22b6dab607d5100.scope: Deactivated successfully.
Jan 20 19:29:46 compute-0 sudo[294708]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:29:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:29:46 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:46 compute-0 sudo[294918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:29:46 compute-0 sudo[294918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:29:46 compute-0 sudo[294918]: pam_unix(sudo:session): session closed for user root
Jan 20 19:29:46 compute-0 ceph-mon[74381]: pgmap v1442: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:46 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:29:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:47.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:47.307Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1443: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:47.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:47 compute-0 nova_compute[254061]: 2026-01-20 19:29:47.887 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:47 compute-0 nova_compute[254061]: 2026-01-20 19:29:47.890 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:48 compute-0 ceph-mon[74381]: pgmap v1443: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:29:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019473434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:29:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:29:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019473434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:29:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:29:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:48.910Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:29:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:48.910Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:29:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3019473434' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:29:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3019473434' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:29:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:49.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:49 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1444: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:49.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:49] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:29:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:49] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:29:50 compute-0 ceph-mon[74381]: pgmap v1444: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:51.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:51 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1445: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:51.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:52 compute-0 podman[294949]: 2026-01-20 19:29:52.086624853 +0000 UTC m=+0.056136980 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:29:52 compute-0 ceph-mon[74381]: pgmap v1445: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:29:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:52 compute-0 nova_compute[254061]: 2026-01-20 19:29:52.891 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:52 compute-0 nova_compute[254061]: 2026-01-20 19:29:52.893 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:52 compute-0 nova_compute[254061]: 2026-01-20 19:29:52.893 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:52 compute-0 nova_compute[254061]: 2026-01-20 19:29:52.894 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:52 compute-0 nova_compute[254061]: 2026-01-20 19:29:52.932 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:52 compute-0 nova_compute[254061]: 2026-01-20 19:29:52.932 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:53.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:53 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1446: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:53.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:54 compute-0 ceph-mon[74381]: pgmap v1446: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.418581) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937394418615, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 1024, "num_deletes": 251, "total_data_size": 1765462, "memory_usage": 1795472, "flush_reason": "Manual Compaction"}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937394435224, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1704487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38773, "largest_seqno": 39796, "table_properties": {"data_size": 1699546, "index_size": 2465, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11005, "raw_average_key_size": 19, "raw_value_size": 1689492, "raw_average_value_size": 3049, "num_data_blocks": 109, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768937308, "oldest_key_time": 1768937308, "file_creation_time": 1768937394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 16698 microseconds, and 7860 cpu microseconds.
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.435274) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1704487 bytes OK
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.435297) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.437559) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.437582) EVENT_LOG_v1 {"time_micros": 1768937394437574, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.437603) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1760695, prev total WAL file size 1760695, number of live WAL files 2.
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.438548) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1664KB)], [83(14MB)]
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937394439268, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 17427458, "oldest_snapshot_seqno": -1}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7197 keys, 15264067 bytes, temperature: kUnknown
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937394563173, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 15264067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15216758, "index_size": 28189, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 189726, "raw_average_key_size": 26, "raw_value_size": 15087863, "raw_average_value_size": 2096, "num_data_blocks": 1103, "num_entries": 7197, "num_filter_entries": 7197, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768937394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.563515) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 15264067 bytes
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.565293) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.5 rd, 123.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 15.0 +0.0 blob) out(14.6 +0.0 blob), read-write-amplify(19.2) write-amplify(9.0) OK, records in: 7713, records dropped: 516 output_compression: NoCompression
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.565322) EVENT_LOG_v1 {"time_micros": 1768937394565310, "job": 48, "event": "compaction_finished", "compaction_time_micros": 124008, "compaction_time_cpu_micros": 56944, "output_level": 6, "num_output_files": 1, "total_output_size": 15264067, "num_input_records": 7713, "num_output_records": 7197, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937394566133, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937394571983, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.438434) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.572056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.572064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.572071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.572073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:29:54 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:29:54.572076) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:29:55
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['backups', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'volumes', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs']
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:29:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:55.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1447: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:29:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:55.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:29:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:29:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:29:56 compute-0 ceph-mon[74381]: pgmap v1447: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:57.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:57.309Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:57 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1448: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:57.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:29:57 compute-0 nova_compute[254061]: 2026-01-20 19:29:57.933 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:57 compute-0 nova_compute[254061]: 2026-01-20 19:29:57.934 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:29:57 compute-0 nova_compute[254061]: 2026-01-20 19:29:57.935 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:29:57 compute-0 nova_compute[254061]: 2026-01-20 19:29:57.935 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:57 compute-0 nova_compute[254061]: 2026-01-20 19:29:57.935 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:29:57 compute-0 nova_compute[254061]: 2026-01-20 19:29:57.936 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:29:58 compute-0 ceph-mon[74381]: pgmap v1448: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:29:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:29:58.911Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:29:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:29:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:29:59.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:29:59 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1449: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:29:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:29:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:29:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:29:59.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:29:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:29:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:29:59] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:30:00 compute-0 ceph-mon[74381]: log_channel(cluster) log [WRN] : overall HEALTH_WARN 1 failed cephadm daemon(s)
Jan 20 19:30:00 compute-0 podman[294976]: 2026-01-20 19:30:00.144611542 +0000 UTC m=+0.121909741 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 20 19:30:00 compute-0 ceph-mon[74381]: pgmap v1449: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:00 compute-0 ceph-mon[74381]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Jan 20 19:30:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:01.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:01 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1450: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:01.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:02 compute-0 ceph-mon[74381]: pgmap v1450: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:02 compute-0 nova_compute[254061]: 2026-01-20 19:30:02.938 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:30:03 compute-0 sudo[295005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:30:03 compute-0 sudo[295005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:03 compute-0 sudo[295005]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:03.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:03 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1451: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:30:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:03.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:30:04 compute-0 ceph-mon[74381]: pgmap v1451: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:05.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:05 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1452: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:05.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:06 compute-0 ceph-mon[74381]: pgmap v1452: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:07.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:07.310Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:07 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1453: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:30:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:07.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:30:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:07 compute-0 nova_compute[254061]: 2026-01-20 19:30:07.940 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:08 compute-0 ceph-mon[74381]: pgmap v1453: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:08.913Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:30:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:08.913Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:30:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:09.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:09 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1454: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:09.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:30:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:09] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:30:10 compute-0 ceph-mon[74381]: pgmap v1454: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:30:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:11.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:11 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1455: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:11.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:12 compute-0 ceph-mon[74381]: pgmap v1455: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:12 compute-0 nova_compute[254061]: 2026-01-20 19:30:12.941 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:13.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:13 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1456: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:13.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:14 compute-0 ceph-mon[74381]: pgmap v1456: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:15.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:15 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1457: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:15.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:17.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:17.311Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:17 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1458: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:17 compute-0 ceph-mon[74381]: pgmap v1457: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:30:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:17.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:30:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:17 compute-0 nova_compute[254061]: 2026-01-20 19:30:17.943 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:18 compute-0 ceph-mon[74381]: pgmap v1458: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:18.914Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:19.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:19 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1459: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:19.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:30:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.129 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.204 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.204 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.205 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.205 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.206 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:30:20 compute-0 ceph-mon[74381]: pgmap v1459: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:20 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:30:20 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2888365851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.724 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.885 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.886 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4497MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.887 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.887 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.958 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.959 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:30:20 compute-0 nova_compute[254061]: 2026-01-20 19:30:20.973 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:30:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:21.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:21 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1460: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:21 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:30:21 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1032040668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:21 compute-0 nova_compute[254061]: 2026-01-20 19:30:21.432 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:30:21 compute-0 nova_compute[254061]: 2026-01-20 19:30:21.439 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:30:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:21.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:21 compute-0 nova_compute[254061]: 2026-01-20 19:30:21.461 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:30:21 compute-0 nova_compute[254061]: 2026-01-20 19:30:21.464 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:30:21 compute-0 nova_compute[254061]: 2026-01-20 19:30:21.465 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:30:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2888365851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:21 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1032040668' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:22 compute-0 ceph-mon[74381]: pgmap v1460: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:22 compute-0 nova_compute[254061]: 2026-01-20 19:30:22.944 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:23 compute-0 podman[295095]: 2026-01-20 19:30:23.112676332 +0000 UTC m=+0.076233935 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 20 19:30:23 compute-0 sudo[295106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:30:23 compute-0 sudo[295106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:23 compute-0 sudo[295106]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:23.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:23 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1461: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:23.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3332829207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:24 compute-0 ceph-mon[74381]: pgmap v1461: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/271898426' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:30:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:25.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:25 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1462: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:25.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:25 compute-0 nova_compute[254061]: 2026-01-20 19:30:25.466 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:25 compute-0 nova_compute[254061]: 2026-01-20 19:30:25.466 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:30:26 compute-0 nova_compute[254061]: 2026-01-20 19:30:26.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:26 compute-0 nova_compute[254061]: 2026-01-20 19:30:26.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:30:26 compute-0 nova_compute[254061]: 2026-01-20 19:30:26.129 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:30:26 compute-0 nova_compute[254061]: 2026-01-20 19:30:26.388 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:30:26 compute-0 nova_compute[254061]: 2026-01-20 19:30:26.389 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:26 compute-0 nova_compute[254061]: 2026-01-20 19:30:26.390 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:26 compute-0 ceph-mon[74381]: pgmap v1462: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:27 compute-0 nova_compute[254061]: 2026-01-20 19:30:27.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:27.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:27.312Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Jan 20 19:30:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:27.313Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:27 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1463: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:27.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:27 compute-0 nova_compute[254061]: 2026-01-20 19:30:27.946 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:27 compute-0 nova_compute[254061]: 2026-01-20 19:30:27.948 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:28 compute-0 ceph-mon[74381]: pgmap v1463: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:28 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1639024992' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:28.915Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:29 compute-0 nova_compute[254061]: 2026-01-20 19:30:29.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:29 compute-0 nova_compute[254061]: 2026-01-20 19:30:29.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:30:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:29.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:29 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1464: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:29 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1854791306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:30:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:30:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:30:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:30:30.309 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:30:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:30:30.310 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:30:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:30:30.310 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:30:30 compute-0 ceph-mon[74381]: pgmap v1464: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:31 compute-0 podman[295147]: 2026-01-20 19:30:31.104607244 +0000 UTC m=+0.074217290 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 19:30:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:31.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:31 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1465: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:31.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:32 compute-0 ceph-mon[74381]: pgmap v1465: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:30:32 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 3863 syncs, 3.48 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 565 writes, 856 keys, 565 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 565 writes, 279 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 19:30:32 compute-0 nova_compute[254061]: 2026-01-20 19:30:32.948 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:33.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:33 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1466: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:33.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:34 compute-0 ceph-mon[74381]: pgmap v1466: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:35 compute-0 nova_compute[254061]: 2026-01-20 19:30:35.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:35.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:35 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1467: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:35.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:36 compute-0 ceph-mon[74381]: pgmap v1467: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:37.315Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:37.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:37 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1468: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:37.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:37 compute-0 nova_compute[254061]: 2026-01-20 19:30:37.951 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:38 compute-0 nova_compute[254061]: 2026-01-20 19:30:38.123 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:30:38 compute-0 ceph-mon[74381]: pgmap v1468: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:38.916Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:39.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:39 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1469: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:39.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:30:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:30:40 compute-0 ceph-mon[74381]: pgmap v1469: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:30:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:41.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:41 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1470: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:41.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:42 compute-0 ceph-mon[74381]: pgmap v1470: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:42 compute-0 nova_compute[254061]: 2026-01-20 19:30:42.953 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:30:42 compute-0 nova_compute[254061]: 2026-01-20 19:30:42.955 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:30:42 compute-0 nova_compute[254061]: 2026-01-20 19:30:42.955 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:30:42 compute-0 nova_compute[254061]: 2026-01-20 19:30:42.955 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:30:42 compute-0 nova_compute[254061]: 2026-01-20 19:30:42.985 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:42 compute-0 nova_compute[254061]: 2026-01-20 19:30:42.986 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:30:43 compute-0 sudo[295185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:30:43 compute-0 sudo[295185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:43 compute-0 sudo[295185]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:43.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:43 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1471: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:43.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:43 compute-0 ceph-mon[74381]: pgmap v1471: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:45.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:45 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1472: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:45.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:46 compute-0 ceph-mon[74381]: pgmap v1472: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:30:46 compute-0 sudo[295213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:30:46 compute-0 sudo[295213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:46 compute-0 sudo[295213]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:47 compute-0 sudo[295239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Jan 20 19:30:47 compute-0 sudo[295239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:47.315Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:30:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:47.316Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:47.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:47 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1473: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:47 compute-0 sudo[295239]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:30:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:30:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Jan 20 19:30:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Jan 20 19:30:47 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:47.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:47 compute-0 sudo[295286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:30:47 compute-0 sudo[295286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:47 compute-0 sudo[295286]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:47 compute-0 sudo[295311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:30:47 compute-0 sudo[295311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:47 compute-0 nova_compute[254061]: 2026-01-20 19:30:47.987 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:30:48 compute-0 sudo[295311]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1474: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:30:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:30:48 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 sudo[295369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:30:48 compute-0 sudo[295369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:48 compute-0 sudo[295369]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:48 compute-0 sudo[295394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:30:48 compute-0 sudo[295394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:48 compute-0 ceph-mon[74381]: pgmap v1473: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:30:48 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.785705856 +0000 UTC m=+0.035878827 container create ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Jan 20 19:30:48 compute-0 systemd[1]: Started libpod-conmon-ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897.scope.
Jan 20 19:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.850677245 +0000 UTC m=+0.100850266 container init ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.85678588 +0000 UTC m=+0.106958851 container start ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325)
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.860471429 +0000 UTC m=+0.110644410 container attach ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 19:30:48 compute-0 serene_jennings[295479]: 167 167
Jan 20 19:30:48 compute-0 systemd[1]: libpod-ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897.scope: Deactivated successfully.
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.863210283 +0000 UTC m=+0.113383264 container died ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.770925758 +0000 UTC m=+0.021098749 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-78fdf92e3c824c51b8161cc2f95f691004a7a211905f0fedf8b21934efa41411-merged.mount: Deactivated successfully.
Jan 20 19:30:48 compute-0 podman[295461]: 2026-01-20 19:30:48.901959156 +0000 UTC m=+0.152132127 container remove ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Jan 20 19:30:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:48.917Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:30:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:48.918Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:30:48 compute-0 systemd[1]: libpod-conmon-ec9b06ebc607a72ad5a33ab46843076ce5a22d978801540c0e1ccea7a0179897.scope: Deactivated successfully.
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.067921844 +0000 UTC m=+0.046573375 container create a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:30:49 compute-0 systemd[1]: Started libpod-conmon-a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859.scope.
Jan 20 19:30:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713cf069de8f3296f2b48950afd621ebc0acf397e65784f009ce7d97f6661cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713cf069de8f3296f2b48950afd621ebc0acf397e65784f009ce7d97f6661cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713cf069de8f3296f2b48950afd621ebc0acf397e65784f009ce7d97f6661cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713cf069de8f3296f2b48950afd621ebc0acf397e65784f009ce7d97f6661cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b713cf069de8f3296f2b48950afd621ebc0acf397e65784f009ce7d97f6661cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.137096107 +0000 UTC m=+0.115747688 container init a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.043791915 +0000 UTC m=+0.022443536 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.149955653 +0000 UTC m=+0.128607184 container start a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.153356174 +0000 UTC m=+0.132007745 container attach a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:30:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:49.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:49 compute-0 ceph-mon[74381]: pgmap v1474: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3519880313' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:30:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3519880313' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:30:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:49.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:49 compute-0 busy_jennings[295518]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:30:49 compute-0 busy_jennings[295518]: --> All data devices are unavailable
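
The two busy_jennings lines above are ceph-volume's verdict on the `lvm batch` call issued at 19:30:48: the only candidate device, /dev/ceph_vg0/ceph_lv0, is a logical volume ("0 physical, 1 LVM") and is rejected as unavailable. The `lvm list` output further below suggests the likely reason: the LV already carries ceph.* tags for osd.0, and ceph-volume treats a tagged LV as already consumed. A minimal sketch of that check, assuming cephadm is on PATH and the script runs as root; it repeats the exact query cephadm itself issues a moment later in this log:

    import json
    import subprocess

    # Same invocation as the sudo COMMAND logged at 19:30:49 below:
    #   ... ceph-volume --fsid <fsid> -- lvm list --format json
    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"
    out = subprocess.run(
        ["cephadm", "ceph-volume", "--fsid", FSID, "--",
         "lvm", "list", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Top-level keys are OSD ids; any hit means the LV is already an OSD.
    for osd_id, lvs in json.loads(out).items():
        for lv in lvs:
            print(f"{lv['lv_path']} already prepared as osd.{osd_id}")
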
Jan 20 19:30:49 compute-0 systemd[1]: libpod-a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859.scope: Deactivated successfully.
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.545948065 +0000 UTC m=+0.524599616 container died a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b713cf069de8f3296f2b48950afd621ebc0acf397e65784f009ce7d97f6661cd-merged.mount: Deactivated successfully.
Jan 20 19:30:49 compute-0 podman[295503]: 2026-01-20 19:30:49.590848174 +0000 UTC m=+0.569499715 container remove a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_jennings, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:30:49 compute-0 systemd[1]: libpod-conmon-a3f882d892b2ef03d31643ff31461e634a06efe5ceaa1f65d885ba6e01543859.scope: Deactivated successfully.
Jan 20 19:30:49 compute-0 sudo[295394]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:49 compute-0 sudo[295547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:30:49 compute-0 sudo[295547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:49 compute-0 sudo[295547]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:49 compute-0 sudo[295572]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:30:49 compute-0 sudo[295572]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:30:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:49] "GET /metrics HTTP/1.1" 200 48452 "" "Prometheus/2.51.0"
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.073541571 +0000 UTC m=+0.036505174 container create e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_carver, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:30:50 compute-0 systemd[1]: Started libpod-conmon-e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a.scope.
Jan 20 19:30:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.13813083 +0000 UTC m=+0.101094473 container init e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_carver, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.143772822 +0000 UTC m=+0.106736425 container start e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_carver, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.146754322 +0000 UTC m=+0.109717935 container attach e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:30:50 compute-0 inspiring_carver[295656]: 167 167
Jan 20 19:30:50 compute-0 systemd[1]: libpod-e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a.scope: Deactivated successfully.
Jan 20 19:30:50 compute-0 conmon[295656]: conmon e9595f46435ab204760b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a.scope/container/memory.events
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.149059024 +0000 UTC m=+0.112022647 container died e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_carver, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.058380262 +0000 UTC m=+0.021343895 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-addf24f8d30545b565e2f1afd51fcf2cdc1c6e8e733da8caf4eabc4610aa40f2-merged.mount: Deactivated successfully.
Jan 20 19:30:50 compute-0 podman[295640]: 2026-01-20 19:30:50.180682215 +0000 UTC m=+0.143645818 container remove e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_carver, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:30:50 compute-0 systemd[1]: libpod-conmon-e9595f46435ab204760bc731e994be766a4cb0824eff656ab93cfa28c6f4769a.scope: Deactivated successfully.
Jan 20 19:30:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1475: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.352040679 +0000 UTC m=+0.056046161 container create 669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:30:50 compute-0 systemd[1]: Started libpod-conmon-669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7.scope.
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.324397565 +0000 UTC m=+0.028403057 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:30:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2242fc659b359a666263db4fbb36f856e5ccd7b6cb0d3778190139d84ef94e84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2242fc659b359a666263db4fbb36f856e5ccd7b6cb0d3778190139d84ef94e84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2242fc659b359a666263db4fbb36f856e5ccd7b6cb0d3778190139d84ef94e84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2242fc659b359a666263db4fbb36f856e5ccd7b6cb0d3778190139d84ef94e84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.439189475 +0000 UTC m=+0.143194947 container init 669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.447884349 +0000 UTC m=+0.151889801 container start 669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_solomon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.45051775 +0000 UTC m=+0.154523232 container attach 669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_solomon, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:30:50 compute-0 cranky_solomon[295694]: {
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:     "0": [
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:         {
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "devices": [
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "/dev/loop3"
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             ],
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "lv_name": "ceph_lv0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "lv_size": "21470642176",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "name": "ceph_lv0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "tags": {
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.cluster_name": "ceph",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.crush_device_class": "",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.encrypted": "0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.osd_id": "0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.type": "block",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.vdo": "0",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:                 "ceph.with_tpm": "0"
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             },
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "type": "block",
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:             "vg_name": "ceph_vg0"
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:         }
Jan 20 19:30:50 compute-0 cranky_solomon[295694]:     ]
Jan 20 19:30:50 compute-0 cranky_solomon[295694]: }
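
The JSON block above is the complete `ceph-volume lvm list --format json` answer: one existing OSD, id 0, osd_fsid 5f53c0c6-6046-4836-83f9-ff93da7e674e, on /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, with the osdspec_affinity tying it to the default_drive_group spec. Since ceph-volume stores these facts as LV tags, they can be cross-checked against LVM directly; a short sketch, assuming stock LVM2 tooling on the host:

    import json
    import subprocess

    # `lvs --reportformat json` is standard LVM2; the LV path is from the log.
    out = subprocess.run(
        ["lvs", "--reportformat", "json", "-o", "lv_name,lv_tags",
         "/dev/ceph_vg0/ceph_lv0"],
        check=True, capture_output=True, text=True,
    ).stdout
    lv = json.loads(out)["report"][0]["lv"][0]
    # lv_tags is a comma-separated key=value list, mirroring "tags" above.
    tags = dict(t.split("=", 1) for t in lv["lv_tags"].split(",") if "=" in t)
    print(tags["ceph.osd_id"], tags["ceph.osd_fsid"])  # expect: 0 5f53c0c6-...
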
Jan 20 19:30:50 compute-0 systemd[1]: libpod-669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7.scope: Deactivated successfully.
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.751602707 +0000 UTC m=+0.455608169 container died 669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_solomon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-2242fc659b359a666263db4fbb36f856e5ccd7b6cb0d3778190139d84ef94e84-merged.mount: Deactivated successfully.
Jan 20 19:30:50 compute-0 podman[295678]: 2026-01-20 19:30:50.80592478 +0000 UTC m=+0.509930252 container remove 669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:30:50 compute-0 systemd[1]: libpod-conmon-669ba37618471e49a4a55a3acc0c255506093773db44fbc664826ddb2741c2b7.scope: Deactivated successfully.
Jan 20 19:30:50 compute-0 sudo[295572]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:50 compute-0 sudo[295717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:30:50 compute-0 sudo[295717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:50 compute-0 sudo[295717]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:50 compute-0 sudo[295742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:30:50 compute-0 sudo[295742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:51.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.388768262 +0000 UTC m=+0.055142015 container create 5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mclaren, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Jan 20 19:30:51 compute-0 systemd[1]: Started libpod-conmon-5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1.scope.
Jan 20 19:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.371102886 +0000 UTC m=+0.037476609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.474212243 +0000 UTC m=+0.140585956 container init 5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mclaren, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.484624343 +0000 UTC m=+0.150998066 container start 5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mclaren, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.487604384 +0000 UTC m=+0.153978117 container attach 5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mclaren, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:30:51 compute-0 admiring_mclaren[295825]: 167 167
Jan 20 19:30:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:30:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:51.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:30:51 compute-0 systemd[1]: libpod-5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1.scope: Deactivated successfully.
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.491971981 +0000 UTC m=+0.158345694 container died 5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mclaren, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Jan 20 19:30:51 compute-0 ceph-mon[74381]: pgmap v1475: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c658a5c258702a3c1ee47651899738abeb5a469cea02a1144811f70c1000c1f-merged.mount: Deactivated successfully.
Jan 20 19:30:51 compute-0 podman[295808]: 2026-01-20 19:30:51.53505081 +0000 UTC m=+0.201424523 container remove 5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_mclaren, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:30:51 compute-0 systemd[1]: libpod-conmon-5c83979261e580b01ed99fd0fa72209ddd3385594bd42221607bd9210113f5d1.scope: Deactivated successfully.
Jan 20 19:30:51 compute-0 podman[295850]: 2026-01-20 19:30:51.715141139 +0000 UTC m=+0.041351773 container create a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:30:51 compute-0 systemd[1]: Started libpod-conmon-a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b.scope.
Jan 20 19:30:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca79982e2f9540d0cb53efd51502ba4b71924361aa57b9b1e45d82a3963a672a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca79982e2f9540d0cb53efd51502ba4b71924361aa57b9b1e45d82a3963a672a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca79982e2f9540d0cb53efd51502ba4b71924361aa57b9b1e45d82a3963a672a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca79982e2f9540d0cb53efd51502ba4b71924361aa57b9b1e45d82a3963a672a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:30:51 compute-0 podman[295850]: 2026-01-20 19:30:51.695701716 +0000 UTC m=+0.021912420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:30:51 compute-0 podman[295850]: 2026-01-20 19:30:51.791387523 +0000 UTC m=+0.117598197 container init a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:30:51 compute-0 podman[295850]: 2026-01-20 19:30:51.805875463 +0000 UTC m=+0.132086107 container start a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Jan 20 19:30:51 compute-0 podman[295850]: 2026-01-20 19:30:51.808726989 +0000 UTC m=+0.134937643 container attach a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_visvesvaraya, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:30:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1476: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:52 compute-0 lvm[295941]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:30:52 compute-0 lvm[295941]: VG ceph_vg0 finished
Jan 20 19:30:52 compute-0 sharp_visvesvaraya[295866]: {}
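
sharp_visvesvaraya's `{}` is the answer to the `raw list --format json` call from 19:30:50: no raw-mode (non-LVM) OSDs exist on this host, consistent with the single OSD being LVM-based and therefore appearing only in the `lvm list` output above. cephadm consults both listings to decide what is already deployed; a rough sketch of that merge, under the same PATH/root assumptions as above (the cv() helper is hypothetical):

    import json
    import subprocess

    FSID = "aecbbf3b-b405-507b-97d7-637a83f5b4b1"

    def cv(*args):
        """Run a ceph-volume listing through cephadm and parse its JSON."""
        cmd = ["cephadm", "ceph-volume", "--fsid", FSID, "--",
               *args, "--format", "json"]
        return json.loads(subprocess.run(
            cmd, check=True, capture_output=True, text=True).stdout)

    lvm_osds = set(cv("lvm", "list"))                    # {'0'} in this log
    raw_osds = {str(d["osd_id"]) for d in cv("raw", "list").values()
                if "osd_id" in d}                        # empty here: {}
    print("OSDs on host:", sorted(lvm_osds | raw_osds))  # ['0']
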
Jan 20 19:30:52 compute-0 systemd[1]: libpod-a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b.scope: Deactivated successfully.
Jan 20 19:30:52 compute-0 systemd[1]: libpod-a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b.scope: Consumed 1.129s CPU time.
Jan 20 19:30:52 compute-0 podman[295945]: 2026-01-20 19:30:52.568855365 +0000 UTC m=+0.032889737 container died a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca79982e2f9540d0cb53efd51502ba4b71924361aa57b9b1e45d82a3963a672a-merged.mount: Deactivated successfully.
Jan 20 19:30:52 compute-0 podman[295945]: 2026-01-20 19:30:52.606889869 +0000 UTC m=+0.070924231 container remove a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_visvesvaraya, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:30:52 compute-0 systemd[1]: libpod-conmon-a5aaab1ee9e3865f873b8a1c3df2f236fcad798e421fd4a1225ce124bf127c4b.scope: Deactivated successfully.
Jan 20 19:30:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:52 compute-0 sudo[295742]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:30:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:30:52 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:52 compute-0 sudo[295962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:30:52 compute-0 sudo[295962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:30:52 compute-0 sudo[295962]: pam_unix(sudo:session): session closed for user root
Jan 20 19:30:52 compute-0 nova_compute[254061]: 2026-01-20 19:30:52.989 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:53.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:53.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:53 compute-0 ceph-mon[74381]: pgmap v1476: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:53 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:53 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:30:54 compute-0 podman[295989]: 2026-01-20 19:30:54.075733977 +0000 UTC m=+0.055032933 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 20 19:30:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1477: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:30:55
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['.nfs', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'volumes', 'vms', '.mgr', '.rgw.root', 'images', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:30:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:55.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:55.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:55 compute-0 ceph-mon[74381]: pgmap v1477: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:30:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:30:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1478: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:30:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:57.317Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:30:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:57.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:30:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:30:57 compute-0 ceph-mon[74381]: pgmap v1478: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Jan 20 19:30:57 compute-0 nova_compute[254061]: 2026-01-20 19:30:57.991 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:30:57 compute-0 nova_compute[254061]: 2026-01-20 19:30:57.993 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:30:57 compute-0 nova_compute[254061]: 2026-01-20 19:30:57.993 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:30:57 compute-0 nova_compute[254061]: 2026-01-20 19:30:57.994 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:30:58 compute-0 nova_compute[254061]: 2026-01-20 19:30:58.020 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:30:58 compute-0 nova_compute[254061]: 2026-01-20 19:30:58.021 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:30:58 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1479: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:58 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:30:58.919Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:30:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:30:59.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:59 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:30:59 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:30:59 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:30:59.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:30:59 compute-0 ceph-mon[74381]: pgmap v1479: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Jan 20 19:30:59 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:59] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:30:59 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:30:59] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:31:00 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1480: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:01.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:01 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:01 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:01 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:01.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:01 compute-0 ceph-mon[74381]: pgmap v1480: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:02 compute-0 podman[296019]: 2026-01-20 19:31:02.156715275 +0000 UTC m=+0.123438616 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:31:02 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1481: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:02 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:03 compute-0 nova_compute[254061]: 2026-01-20 19:31:03.022 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:03 compute-0 nova_compute[254061]: 2026-01-20 19:31:03.023 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:03 compute-0 nova_compute[254061]: 2026-01-20 19:31:03.024 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:31:03 compute-0 nova_compute[254061]: 2026-01-20 19:31:03.024 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:03 compute-0 nova_compute[254061]: 2026-01-20 19:31:03.058 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:03 compute-0 nova_compute[254061]: 2026-01-20 19:31:03.059 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:03 compute-0 sudo[296048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:31:03 compute-0 sudo[296048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:03 compute-0 sudo[296048]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:03.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:03 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:03 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:03 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:03.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:03 compute-0 ceph-mon[74381]: pgmap v1481: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:04 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1482: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:05.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:05 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:05 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:05 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:05.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:05 compute-0 ceph-mon[74381]: pgmap v1482: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:06 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1483: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:07 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:07.319Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:07.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:07 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:07 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:07 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:07.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:07 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.644660) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937467644694, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 893, "num_deletes": 251, "total_data_size": 1521537, "memory_usage": 1557720, "flush_reason": "Manual Compaction"}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937467653498, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 980373, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39797, "largest_seqno": 40689, "table_properties": {"data_size": 976644, "index_size": 1445, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9898, "raw_average_key_size": 20, "raw_value_size": 968704, "raw_average_value_size": 2048, "num_data_blocks": 62, "num_entries": 473, "num_filter_entries": 473, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768937395, "oldest_key_time": 1768937395, "file_creation_time": 1768937467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 8882 microseconds, and 3919 cpu microseconds.
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.653540) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 980373 bytes OK
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.653558) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.655314) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.655332) EVENT_LOG_v1 {"time_micros": 1768937467655327, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.655349) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 1517255, prev total WAL file size 1517255, number of live WAL files 2.
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.655955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323537' seq:72057594037927935, type:22 .. '6D6772737461740031353039' seq:0, type:0; will stop at (end)
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(957KB)], [86(14MB)]
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937467656001, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 16244440, "oldest_snapshot_seqno": -1}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 7183 keys, 12638365 bytes, temperature: kUnknown
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937467731348, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 12638365, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12594991, "index_size": 24318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 189618, "raw_average_key_size": 26, "raw_value_size": 12470012, "raw_average_value_size": 1736, "num_data_blocks": 942, "num_entries": 7183, "num_filter_entries": 7183, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768934326, "oldest_key_time": 0, "file_creation_time": 1768937467, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "cbf3ab03-d51c-4622-b6c7-e997cd5246eb", "db_session_id": "I40O2DG19JCNHUB0JQU4", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.731604) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 12638365 bytes
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.732765) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.4 rd, 167.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 14.6 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(29.5) write-amplify(12.9) OK, records in: 7670, records dropped: 487 output_compression: NoCompression
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.732789) EVENT_LOG_v1 {"time_micros": 1768937467732778, "job": 50, "event": "compaction_finished", "compaction_time_micros": 75418, "compaction_time_cpu_micros": 34982, "output_level": 6, "num_output_files": 1, "total_output_size": 12638365, "num_input_records": 7670, "num_output_records": 7183, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937467733157, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937467736430, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.655870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.736612) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.736621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.736624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.736627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:31:07 compute-0 ceph-mon[74381]: rocksdb: (Original Log Time 2026/01/20-19:31:07.736630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:31:07 compute-0 ceph-mon[74381]: pgmap v1483: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:08 compute-0 nova_compute[254061]: 2026-01-20 19:31:08.059 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:08 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1484: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:08 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:08.920Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:09.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:09 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:09 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:09 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:09.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:09 compute-0 ceph-mon[74381]: pgmap v1484: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:09 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:09] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:31:09 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:09] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:31:10 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1485: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:10 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:31:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:31:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:11.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:31:11 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:11 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:31:11 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:11.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:31:11 compute-0 ceph-mon[74381]: pgmap v1485: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:12 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1486: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:12 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:13 compute-0 nova_compute[254061]: 2026-01-20 19:31:13.062 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:13.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:13 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:13 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:13 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:13 compute-0 ceph-mon[74381]: pgmap v1486: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:14 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1487: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:15.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:15 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:15 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:15 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:15.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:15 compute-0 ceph-mon[74381]: pgmap v1487: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:16 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1488: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:17 compute-0 ceph-mon[74381]: pgmap v1488: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:17 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:17.320Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:17.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:17 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:17 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:17 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:17.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:17 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:18 compute-0 nova_compute[254061]: 2026-01-20 19:31:18.065 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:18 compute-0 nova_compute[254061]: 2026-01-20 19:31:18.067 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:18 compute-0 nova_compute[254061]: 2026-01-20 19:31:18.067 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:31:18 compute-0 nova_compute[254061]: 2026-01-20 19:31:18.067 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:18 compute-0 nova_compute[254061]: 2026-01-20 19:31:18.118 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:18 compute-0 nova_compute[254061]: 2026-01-20 19:31:18.119 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:18 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1489: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:18 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:18.921Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:19.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:19 compute-0 ceph-mon[74381]: pgmap v1489: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:19 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:19 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:19 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:19 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:31:19 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:19] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Jan 20 19:31:20 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1490: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:21.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:21 compute-0 ceph-mon[74381]: pgmap v1490: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:21 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:21 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:21 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:21.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.226 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.227 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.227 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.227 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.228 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:31:22 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1491: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:22 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:31:22 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3937136939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.702 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.885 254065 WARNING nova.virt.libvirt.driver [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.886 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4513MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.887 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:31:22 compute-0 nova_compute[254061]: 2026-01-20 19:31:22.887 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.000 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.001 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.024 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.119 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.140 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.141 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5022 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.141 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.142 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.144 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:23.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:23 compute-0 sudo[296135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:31:23 compute-0 sudo[296135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:23 compute-0 sudo[296135]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:23 compute-0 ceph-mon[74381]: pgmap v1491: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:23 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3937136939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:23 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:31:23 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2615363095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.501 254065 DEBUG oslo_concurrency.processutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.508 254065 DEBUG nova.compute.provider_tree [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed in ProviderTree for provider: cb9161e5-191d-495c-920a-01144f42a215 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:31:23 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:23 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:23 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:23.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.733 254065 DEBUG nova.scheduler.client.report [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Inventory has not changed for provider cb9161e5-191d-495c-920a-01144f42a215 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.735 254065 DEBUG nova.compute.resource_tracker [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:31:23 compute-0 nova_compute[254061]: 2026-01-20 19:31:23.735 254065 DEBUG oslo_concurrency.lockutils [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:31:24 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1492: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2615363095' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:24 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2016771661' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:25 compute-0 podman[296163]: 2026-01-20 19:31:25.100604484 +0000 UTC m=+0.068242908 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 19:31:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:31:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:31:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:31:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:31:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:31:25 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:31:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:25.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:25 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:25 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:25 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:25.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:25 compute-0 ceph-mon[74381]: pgmap v1492: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:25 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2651267716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:25 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:31:26 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1493: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.736 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.737 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.737 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.759 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.759 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.759 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.760 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:26 compute-0 nova_compute[254061]: 2026-01-20 19:31:26.760 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:27 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:27.320Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:27.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:27 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:27 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:27 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:27 compute-0 ceph-mon[74381]: pgmap v1493: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:27 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:28 compute-0 nova_compute[254061]: 2026-01-20 19:31:28.145 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:28 compute-0 nova_compute[254061]: 2026-01-20 19:31:28.148 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 20 19:31:28 compute-0 nova_compute[254061]: 2026-01-20 19:31:28.149 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5004 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Jan 20 19:31:28 compute-0 nova_compute[254061]: 2026-01-20 19:31:28.149 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:28 compute-0 nova_compute[254061]: 2026-01-20 19:31:28.187 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:28 compute-0 nova_compute[254061]: 2026-01-20 19:31:28.187 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 20 19:31:28 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1494: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:28 compute-0 sshd-session[296186]: Accepted publickey for zuul from 192.168.122.10 port 36938 ssh2: ECDSA SHA256:OqjpBifwMHsrzwMbJxHbqE54q1skVz6aecL1tN9sOps
Jan 20 19:31:28 compute-0 systemd-logind[796]: New session 60 of user zuul.
Jan 20 19:31:28 compute-0 systemd[1]: Started Session 60 of User zuul.
Jan 20 19:31:28 compute-0 sshd-session[296186]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:31:28 compute-0 sudo[296191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 20 19:31:28 compute-0 sudo[296191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:31:28 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:28.923Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:29 compute-0 nova_compute[254061]: 2026-01-20 19:31:29.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:29.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:29 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:29 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:29 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:29.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:29 compute-0 ceph-mon[74381]: pgmap v1494: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:29 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:31:29 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:29] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:31:30 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1495: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:31:30.311 165659 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:31:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:31:30.311 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:31:30 compute-0 ovn_metadata_agent[165637]: 2026-01-20 19:31:30.311 165659 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:31:30 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1485699795' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:31 compute-0 nova_compute[254061]: 2026-01-20 19:31:31.128 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:31 compute-0 nova_compute[254061]: 2026-01-20 19:31:31.128 254065 DEBUG nova.compute.manager [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:31:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:31.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18054 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:31 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:31 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:31 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:31.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27047 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:31 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26866 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:31 compute-0 ceph-mon[74381]: pgmap v1495: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:31 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/223737142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Jan 20 19:31:32 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1496: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18063 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27062 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:32 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26878 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:32 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:32 compute-0 ceph-mon[74381]: from='client.18054 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:32 compute-0 ceph-mon[74381]: from='client.27047 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:32 compute-0 ceph-mon[74381]: from='client.26866 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:33 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 19:31:33 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/281603779' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:31:33 compute-0 podman[296421]: 2026-01-20 19:31:33.183763308 +0000 UTC m=+0.142596760 container health_status d1ce553cd23751e522deaf2ae5562d4a89b66e67624ebeebc5e3d3fb4ee6c5cf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 20 19:31:33 compute-0 nova_compute[254061]: 2026-01-20 19:31:33.187 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:33 compute-0 nova_compute[254061]: 2026-01-20 19:31:33.188 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:33.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:33 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:33 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:33 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:33.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:33 compute-0 ceph-mon[74381]: pgmap v1496: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:33 compute-0 ceph-mon[74381]: from='client.18063 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:33 compute-0 ceph-mon[74381]: from='client.27062 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:33 compute-0 ceph-mon[74381]: from='client.26878 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3596073401' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:31:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/281603779' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:31:33 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2286102174' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Jan 20 19:31:34 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1497: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:35.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:35 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:35 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:35 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:35.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:35 compute-0 ceph-mon[74381]: pgmap v1497: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:36 compute-0 nova_compute[254061]: 2026-01-20 19:31:36.124 254065 DEBUG oslo_service.periodic_task [None req-50360144-c0fa-4df1-aa8a-e6fac8fe8ae3 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:31:36 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1498: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:36 compute-0 ovs-vsctl[296559]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 20 19:31:37 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 20 19:31:37 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 20 19:31:37 compute-0 virtqemud[253535]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 19:31:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:37.321Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Jan 20 19:31:37 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:37.323Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:31:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:37.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:31:37 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:37 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:37 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:37 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:37 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: cache status {prefix=cache status} (starting...)
Jan 20 19:31:37 compute-0 ceph-mon[74381]: pgmap v1498: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:37 compute-0 lvm[296862]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:31:37 compute-0 lvm[296862]: VG ceph_vg0 finished
Jan 20 19:31:37 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: client ls {prefix=client ls} (starting...)
Jan 20 19:31:38 compute-0 nova_compute[254061]: 2026-01-20 19:31:38.189 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:38 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1499: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18084 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26902 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: damage ls {prefix=damage ls} (starting...)
Jan 20 19:31:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27074 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump loads {prefix=dump loads} (starting...)
Jan 20 19:31:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:31:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/324754075' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 20 19:31:38 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/324754075' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18096 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 20 19:31:38 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:38.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:31:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26920 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:38 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:31:38 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:31:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825939662' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27086 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18114 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26929 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:39.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Jan 20 19:31:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/399181121' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27098 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 sshd-session[296876]: Received disconnect from 43.103.0.45 port 59578:11:  [preauth]
Jan 20 19:31:39 compute-0 sshd-session[296876]: Disconnected from authenticating user root 43.103.0.45 port 59578 [preauth]
Jan 20 19:31:39 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 20 19:31:39 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:39 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:39 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:39.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18126 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: ops {prefix=ops} (starting...)
Jan 20 19:31:39 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:39] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26953 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: pgmap v1499: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.18084 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.26902 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.27074 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.18096 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/650088752' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3921270906' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/825939662' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2195715294' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2407176101' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/399181121' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3622271551' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3441390405' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27116 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:39 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 20 19:31:39 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301839796' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 20 19:31:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3609542290' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1500: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18156 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: session ls {prefix=session ls} (starting...)
Jan 20 19:31:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 19:31:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3731135745' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mds[96670]: mds.cephfs.compute-0.bekmxe asok_command: status {prefix=status} (starting...)
Jan 20 19:31:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27140 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.26980 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18183 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.26920 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.27086 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.18114 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.26929 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.27098 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.18126 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.26953 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.27116 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3301839796' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3609542290' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4062045186' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/318620608' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2585355992' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1676255513' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3731135745' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1307975834' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 19:31:40 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2007398454' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:40 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27158 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27001 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:31:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/589209743' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 19:31:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3543965102' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:41.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:31:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 20 19:31:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760931713' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:31:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:41 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:41 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:41 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:31:41 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1816900849' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: pgmap v1500: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.18156 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.27140 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.26980 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.18183 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/723728142' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2007398454' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/589209743' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/490718176' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3232464888' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3543965102' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3389765450' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3760931713' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4072902411' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3073331614' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3389876415' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1816900849' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18231 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:41 compute-0 ceph-mgr[74676]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:31:41 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T19:31:41.877+0000 7fb4429f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:31:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 19:31:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1036927027' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1501: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:42 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27206 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T19:31:42.297+0000 7fb4429f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:31:42 compute-0 ceph-mgr[74676]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:31:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 20 19:31:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349240815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27040 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:42 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: 2026-01-20T19:31:42.408+0000 7fb4429f1640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:31:42 compute-0 ceph-mgr[74676]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:31:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 19:31:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2841255167' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:42 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 20 19:31:42 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985340158' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.27158 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.27001 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4131084017' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/721790313' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/746757915' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1266210608' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1036927027' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2349240815' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/897597623' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3784498074' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2841255167' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2797832172' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1985340158' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:31:42 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18276 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 nova_compute[254061]: 2026-01-20 19:31:43.191 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 19:31:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/74373412' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18291 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27067 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:43.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27245 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 sudo[297796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Jan 20 19:31:43 compute-0 sudo[297796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:43 compute-0 sudo[297796]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:43 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:43 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:43 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:43 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 19:31:43 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3934550366' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18300 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27082 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27260 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.18231 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: pgmap v1501: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.27206 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.27040 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/293874381' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1172913009' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3252191830' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2154426499' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/74373412' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1910112346' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/474365900' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3934550366' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:49.843300+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:50.843544+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb3e000
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.763120651s of 31.766254425s, submitted: 1
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649d0df8800 session 0x5649d0db72c0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649ce399c00 session 0x5649ce8d85a0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:51.843908+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971117 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:52.844090+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:53.844243+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:54.844477+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:55.844672+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:56.844929+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbea000
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972629 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:57.845140+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:58.845325+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:58:59.845561+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:00.845863+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:01.846073+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972629 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.176573753s of 11.186085701s, submitted: 2
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:02.846249+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:03.846409+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:04.846543+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:05.846720+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:06.846910+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974141 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:07.847073+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:08.847192+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:09.847386+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:10.847594+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:11.847735+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974141 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:12.847885+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:13.848075+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.656981468s of 12.160857201s, submitted: 3
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:14.848208+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:15.848354+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:16.848578+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973550 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:17.848870+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:18.849217+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:19.849427+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86409216 unmapped: 3809280 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:20.849615+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:21.849750+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:22.849878+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:23.850007+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:24.850126+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:25.850287+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:26.850426+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:27.850646+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:28.850846+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:29.850986+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:30.851228+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 3801088 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:31.851378+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:32.851755+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:33.851907+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:34.852060+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:35.852554+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:36.852796+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:37.853285+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:38.853496+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:39.853705+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:40.853859+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:41.854106+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:42.854266+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:43.854427+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:44.854600+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:45.854773+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbe8400 session 0x5649d0db6f00
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:46.854923+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:47.855195+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 3792896 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:48.855460+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:49.855654+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:50.855884+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:51.856079+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973418 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:52.856225+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:53.856398+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:54.856565+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:55.856890+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:56.857097+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce398400
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 42.696498871s of 42.703052521s, submitted: 2
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973550 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:57.857315+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T18:59:58.857441+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:00.274057+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 3784704 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:01.274198+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 3776512 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:02.274597+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 3776512 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975062 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 9010 writes, 35K keys, 9010 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 9010 writes, 1929 syncs, 4.67 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 622 writes, 961 keys, 622 commit groups, 1.0 writes per commit group, ingest: 0.32 MB, 0.00 MB/s
                                           Interval WAL: 622 writes, 300 syncs, 2.07 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.01              0.00         1    0.006       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b49b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5649cc1b5350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 2.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:03.274739+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:04.274896+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:05.275155+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:06.275355+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:07.275513+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974471 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:08.275667+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:09.275834+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:10.276008+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:11.276125+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:12.276250+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.116757393s of 15.127565384s, submitted: 3
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:13.276442+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:14.276650+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:15.276853+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86474752 unmapped: 3743744 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:16.276992+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86482944 unmapped: 3735552 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:17.277114+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 3727360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:18.277594+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 3727360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:19.277717+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86491136 unmapped: 3727360 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:20.277853+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:21.277989+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:22.278117+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:23.278236+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:24.278364+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 3719168 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:25.278494+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:26.278625+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:27.278792+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:28.279074+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:29.279207+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:30.279341+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:31.279553+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:32.279901+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:33.280382+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:34.280509+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:35.280632+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:36.281039+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 3710976 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:37.281334+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:38.281544+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:39.281914+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:40.282614+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:41.282779+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:42.282926+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:43.283077+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:44.283198+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:45.283358+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:46.283513+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:47.283685+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:48.283876+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:49.284026+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:50.284147+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86515712 unmapped: 3702784 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:51.284300+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:52.284429+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:53.284827+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:54.284961+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:55.285074+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:56.285384+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:57.285516+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:58.285659+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:00:59.285817+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:00.285927+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:01.286056+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:02.286174+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:03.286303+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:04.286419+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:05.286669+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:06.286783+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:07.286959+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:08.287157+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:09.287301+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:10.287445+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:11.287571+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:12.287655+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:13.287776+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:14.287901+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:15.288065+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:16.288194+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:17.288371+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:18.288560+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:19.288729+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:20.288883+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86523904 unmapped: 3694592 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:21.289026+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:22.289148+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:23.289256+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:24.289433+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86532096 unmapped: 3686400 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:25.289583+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:26.289709+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:27.289904+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:28.290070+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:29.290240+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbea000 session 0x5649cf975c20
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfb3e000 session 0x5649d0d105a0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:30.290404+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:31.290625+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:32.290751+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:33.290928+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:34.291060+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:35.291184+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:36.291357+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:37.291524+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974339 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:38.291745+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:39.291877+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:40.292026+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 88.315132141s of 88.409767151s, submitted: 1
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:41.292178+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:42.292337+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974471 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:43.292527+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86540288 unmapped: 3678208 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:44.292707+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:45.292858+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:46.293013+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:47.293195+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88000
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977495 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:48.293387+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:49.293561+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649ce398400 session 0x5649d08e10e0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86548480 unmapped: 3670016 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:50.293703+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 3661824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.198902130s of 10.208003044s, submitted: 3
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:51.293883+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 3661824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:52.294032+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86556672 unmapped: 3661824 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976904 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:53.294227+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:54.294387+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:55.294511+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:56.294656+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:57.294826+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976916 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:58.294990+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:01:59.295167+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:00.295284+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:01.295440+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.799607277s of 10.407449722s, submitted: 108
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86597632 unmapped: 3620864 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:02.295612+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d7400
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 3596288 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976904 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:03.295756+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86622208 unmapped: 3596288 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:04.295875+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,1,1,1])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,0,0,1])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:05.295998+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:06.296224+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:07.296394+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976904 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:08.296550+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:09.296664+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbeb000
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:10.296890+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:11.297036+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:12.297127+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.383151054s of 10.842937469s, submitted: 168
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:13.297261+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:14.297383+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:15.297503+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:16.297713+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:17.297909+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:18.298096+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:19.298244+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:20.298392+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:21.298534+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:22.298664+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:23.298796+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:24.298997+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:25.299177+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:26.299350+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:27.299566+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:28.299734+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:29.299845+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:30.300020+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:31.300171+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:32.300330+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:33.300480+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:34.300629+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:35.300795+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86261760 unmapped: 3956736 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:36.300966+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:37.301164+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:38.301348+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:39.301496+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:40.301643+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86269952 unmapped: 3948544 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:41.301858+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:42.302031+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:43.302172+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:44.302304+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:45.302500+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:46.302657+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:47.302794+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:48.302997+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:49.303153+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:50.303314+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:51.303445+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:52.303577+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:53.303718+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:54.303887+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:55.304024+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:56.304150+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:57.304278+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:58.304422+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:02:59.304565+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:00.304707+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:01.304854+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:02.304972+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:03.305156+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:04.305292+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:05.305422+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:06.305542+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:07.305677+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:08.305879+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86278144 unmapped: 3940352 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:09.306013+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:10.306139+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:11.306276+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:12.306425+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:13.306549+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:14.306683+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:15.306846+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86286336 unmapped: 3932160 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:16.306985+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:17.307140+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:18.307283+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:19.307423+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:20.307597+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:21.307734+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:22.307880+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:23.308023+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:24.308190+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:25.308327+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:26.308482+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:27.308687+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:28.308893+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:29.309058+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:30.309211+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:31.309346+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:32.309466+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:33.309642+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:34.309854+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:35.310031+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:36.310201+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:37.310383+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:38.310560+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:39.310689+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:40.310868+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:41.311000+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:42.311123+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:43.311259+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:44.311424+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:45.321511+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:46.321756+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:47.321948+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:48.322170+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:49.322367+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:50.322516+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:51.322687+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:52.322897+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:53.323048+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:54.323208+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:55.323341+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cdc88000 session 0x5649d0ac1a40
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cdab5400 session 0x5649d0d110e0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:56.323472+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:57.323656+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:58.323836+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:03:59.323974+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:00.324115+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:01.324239+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:02.324524+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:03.324725+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977102 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:04.324902+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:05.325070+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:06.325263+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 114.807556152s of 114.820899963s, submitted: 3
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:07.325428+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:08.325646+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977234 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:09.325940+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:10.326147+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:11.326295+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:12.326494+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:13.326680+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978746 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:14.326907+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:15.327070+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:16.327217+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:17.327354+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:18.327527+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978155 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:19.327715+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:20.327937+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.707267761s of 13.717912674s, submitted: 3
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:21.328106+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:22.328292+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:23.328474+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:43 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978023 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:24.328606+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86294528 unmapped: 3923968 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:25.328715+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbe8400 session 0x5649d0c072c0
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:26.328865+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:27.329063+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:43 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:43 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:28.329230+0000)
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:43 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978023 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:29.329388+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:30.329576+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbeb000 session 0x5649d0d2b860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf9d7400 session 0x5649cfc1fa40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:31.329784+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:32.329951+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:33.330098+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978023 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:34.330270+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:35.330431+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:36.330562+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d9800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.972690582s of 15.975857735s, submitted: 1
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:37.330724+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:38.330931+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978155 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:39.331126+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:40.331250+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:41.331386+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbec800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:42.331524+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:43.331742+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978287 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:44.331932+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:45.332136+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:46.332371+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:47.332559+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbeb400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.718964577s of 10.725953102s, submitted: 2
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:48.332799+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979799 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:49.333010+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:50.333154+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:51.333337+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:52.333501+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:53.333636+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979667 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:54.333774+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:55.333899+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86302720 unmapped: 3915776 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:56.334106+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:57.334241+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.459848404s of 10.484528542s, submitted: 2
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:58.334400+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:59.334546+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:00.334684+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:01.334860+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:02.335002+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:03.335118+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:04.335239+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:05.335399+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:06.335512+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:07.335744+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:08.336046+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:09.336203+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:10.336412+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:11.336682+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:12.336976+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:13.337146+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:14.337277+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:15.337533+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:16.337694+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:17.337892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:18.338061+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:19.338184+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:20.338312+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:21.338426+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:22.338544+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbeb400 session 0x5649d08e1a40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbec800 session 0x5649d0c07860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:23.338670+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:24.338867+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:25.339031+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cf97cc00 session 0x5649d08e03c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 ms_handle_reset con 0x5649cfbed000 session 0x5649d0e09c20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:26.339160+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:27.339304+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:28.339489+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979535 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:29.339623+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01e9000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:30.339778+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 heartbeat osd_stat(store_statfs(0x4fc634000/0x0/0x4ffc00000, data 0x11a0e1/0x1d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:31.339931+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 159 handle_osd_map epochs [159,160], i have 159, src has [1,160]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.143909454s of 34.247035980s, submitted: 1
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:32.340100+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86310912 unmapped: 3907584 heap: 90218496 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:33.340233+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 160 handle_osd_map epochs [161,161], i have 160, src has [1,161]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1019962 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86450176 unmapped: 13082624 heap: 99532800 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:34.340715+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 161 handle_osd_map epochs [161,162], i have 161, src has [1,162]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86466560 unmapped: 13066240 heap: 99532800 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 162 ms_handle_reset con 0x5649d01e9000 session 0x5649cf7ab4a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:35.340921+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 162 heartbeat osd_stat(store_statfs(0x4fc1bb000/0x0/0x4ffc00000, data 0x58e460/0x650000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86466560 unmapped: 13066240 heap: 99532800 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:36.341057+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 86499328 unmapped: 21430272 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb48000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:37.341220+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 162 handle_osd_map epochs [163,163], i have 162, src has [1,163]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 163 ms_handle_reset con 0x5649cfbf2c00 session 0x5649d0d105a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:38.341421+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1080143 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:39.341588+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa04800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:40.341734+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 163 heartbeat osd_stat(store_statfs(0x4fb9b4000/0x0/0x4ffc00000, data 0xd927b3/0xe57000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:41.341881+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 163 handle_osd_map epochs [163,164], i have 163, src has [1,164]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:42.342002+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0812800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.648534775s of 10.933682442s, submitted: 74
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:43.342164+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085765 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:44.342319+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:45.342478+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b1000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87556096 unmapped: 20373504 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:46.342681+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:47.342895+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:48.343122+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1085714 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:49.343292+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:50.343481+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:51.343639+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:52.343832+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.901214600s of 10.076541901s, submitted: 5
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:53.344128+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:54.344333+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:55.344512+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:56.344670+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:57.344876+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:58.345070+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:59.345202+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:00.345356+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:01.345472+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:02.345619+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:03.345753+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:04.345896+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:05.346021+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:06.346146+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:07.346257+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:08.346435+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:09.346636+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:10.346862+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:11.347025+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:12.347178+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:13.347322+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1084991 data_alloc: 218103808 data_used: 143360
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:14.347467+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:15.347695+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87580672 unmapped: 20348928 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:16.347853+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87588864 unmapped: 20340736 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:17.348027+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 87588864 unmapped: 20340736 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:18.348201+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 ms_handle_reset con 0x5649cfbf6c00 session 0x5649cf64a780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 ms_handle_reset con 0x5649cf97cc00 session 0x5649d0d30780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095935 data_alloc: 218103808 data_used: 4800512
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 92241920 unmapped: 15687680 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:19.348378+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 heartbeat osd_stat(store_statfs(0x4fb9b2000/0x0/0x4ffc00000, data 0xd94815/0xe5a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 92241920 unmapped: 15687680 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 164 handle_osd_map epochs [165,165], i have 164, src has [1,165]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.999790192s of 27.003772736s, submitted: 1
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:20.348528+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbee800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 165 handle_osd_map epochs [166,166], i have 165, src has [1,166]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 92561408 unmapped: 15368192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:21.348692+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbee800 session 0x5649ce8d8f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbecc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbecc00 session 0x5649d0f701e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:22.348882+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:23.349124+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbf8800 session 0x5649cfb06780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1153554 data_alloc: 218103808 data_used: 4804608
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:24.349246+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:25.349368+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cd9f3c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cd9f3c00 session 0x5649ce8d2f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fb46e000/0x0/0x4ffc00000, data 0x12d5bc3/0x139e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 12943360 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:26.349844+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfbf6400 session 0x5649cf64ab40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 ms_handle_reset con 0x5649cfb41400 session 0x5649d06270e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 12943360 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:27.349961+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94986240 unmapped: 12943360 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:28.350116+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1154165 data_alloc: 218103808 data_used: 4804608
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 94994432 unmapped: 12935168 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:29.350309+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbebc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 heartbeat osd_stat(store_statfs(0x4fa2cd000/0x0/0x4ffc00000, data 0x12d5be6/0x139f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 8241152 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:30.350488+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 99688448 unmapped: 8241152 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:31.350627+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _renew_subs
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 166 handle_osd_map epochs [167,167], i have 166, src has [1,167]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.404651642s of 11.529529572s, submitted: 53
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 7192576 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:32.350911+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100737024 unmapped: 7192576 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:33.351158+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x12d7c48/0x13a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193451 data_alloc: 234881024 data_used: 9924608
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:34.351369+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:35.351782+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x12d7c48/0x13a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:36.352113+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:37.352453+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:38.352901+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193451 data_alloc: 234881024 data_used: 9924608
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:39.353055+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 100753408 unmapped: 7176192 heap: 107929600 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:40.353241+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa2c9000/0x0/0x4ffc00000, data 0x12d7c48/0x13a2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:41.355406+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 3776512 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.024612427s of 10.284733772s, submitted: 118
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:42.355578+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107356160 unmapped: 3719168 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93f6000/0x0/0x4ffc00000, data 0x21abc48/0x2276000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:43.355890+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1317907 data_alloc: 234881024 data_used: 11325440
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:44.356091+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:45.356329+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:46.356645+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:47.356787+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107806720 unmapped: 3268608 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f936c000/0x0/0x4ffc00000, data 0x2235c48/0x2300000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:48.357008+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 3874816 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f934b000/0x0/0x4ffc00000, data 0x2256c48/0x2321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa04800 session 0x5649cf8025a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4400 session 0x5649ce8fc960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314731 data_alloc: 234881024 data_used: 11333632
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:49.357146+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107200512 unmapped: 3874816 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:50.357336+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:51.357474+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:52.357753+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:53.357962+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107208704 unmapped: 3866624 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.102064133s of 12.177471161s, submitted: 37
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314835 data_alloc: 234881024 data_used: 11333632
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:54.358124+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f934b000/0x0/0x4ffc00000, data 0x2256c48/0x2321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:55.358292+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9342000/0x0/0x4ffc00000, data 0x225fc48/0x232a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:56.358463+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9342000/0x0/0x4ffc00000, data 0x225fc48/0x232a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:57.358661+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:58.358849+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:59.359037+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1314835 data_alloc: 234881024 data_used: 11333632
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107249664 unmapped: 3825664 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df9c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df9c00 session 0x5649ce59ab40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa04800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:00.359292+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 3833856 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa04800 session 0x5649d08e0d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:01.359569+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 3833856 heap: 111075328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbea000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbea000 session 0x5649cfdb92c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0811000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0811000 session 0x5649ce8f6960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:02.359945+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:03.360173+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:04.360363+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358416 data_alloc: 234881024 data_used: 11333632
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:05.360516+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0175800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.447368622s of 11.733536720s, submitted: 28
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:06.360680+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:07.360890+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107085824 unmapped: 12386304 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:08.361116+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107118592 unmapped: 12353536 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:09.361290+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1358673 data_alloc: 234881024 data_used: 11366400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107143168 unmapped: 12328960 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:10.361452+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:11.361617+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:12.361789+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:13.361983+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:14.362129+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1397534 data_alloc: 234881024 data_used: 16273408
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:15.362286+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:16.362461+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf9d9800 session 0x5649ce8f74a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8d96000/0x0/0x4ffc00000, data 0x280acaa/0x28d6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:17.362614+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:18.362856+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.861881256s of 12.330757141s, submitted: 8
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:19.362992+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1397510 data_alloc: 234881024 data_used: 16273408
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111050752 unmapped: 8421376 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:20.363108+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111902720 unmapped: 7569408 heap: 119472128 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:21.363245+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115146752 unmapped: 5373952 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:22.363390+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 5226496 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8545000/0x0/0x4ffc00000, data 0x3053caa/0x311f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:23.363529+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115326976 unmapped: 5193728 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:24.363643+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470454 data_alloc: 234881024 data_used: 16990208
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:25.363748+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:26.363885+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:27.364023+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:28.364320+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8544000/0x0/0x4ffc00000, data 0x305ccaa/0x3128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113541120 unmapped: 6979584 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.644721031s of 10.839123726s, submitted: 68
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4800 session 0x5649cdc2d680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:29.364467+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1470346 data_alloc: 234881024 data_used: 16990208
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113549312 unmapped: 6971392 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8544000/0x0/0x4ffc00000, data 0x305ccaa/0x3128000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4800 session 0x5649cf7a3860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:30.364632+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:31.365125+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:32.365253+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:33.365382+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:34.366036+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1325354 data_alloc: 234881024 data_used: 10432512
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:35.366419+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x226bc48/0x2336000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:36.367476+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:37.368641+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:38.368868+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:39.369214+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324135 data_alloc: 234881024 data_used: 10432512
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109682688 unmapped: 10838016 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.674302101s of 10.743412971s, submitted: 23
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbebc00 session 0x5649d0ef0000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0811c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:40.369359+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105193472 unmapped: 15327232 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9336000/0x0/0x4ffc00000, data 0x226bc48/0x2336000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [1])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0811c00 session 0x5649d0f71860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:41.370109+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:42.370455+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:43.370989+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:44.371947+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:45.372172+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:46.372651+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:47.373033+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:48.373243+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:49.373451+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:50.373622+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:51.373873+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:52.374064+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:53.374364+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:54.374528+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:55.374714+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:56.374922+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:57.375160+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:58.375487+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:59.375685+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:00.375902+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:01.376104+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:02.376301+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:03.376433+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:04.376586+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3000 session 0x5649d01810e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1129106 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8c00 session 0x5649cf9254a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:05.377555+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105226240 unmapped: 15294464 heap: 120520704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649cf925860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce8f7860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbebc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.769863129s of 25.853391647s, submitted: 35
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbebc00 session 0x5649d06265a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3000 session 0x5649cfb07e00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0ef0d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce9850e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649cf7ab680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:06.378387+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 25518080 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:07.379229+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 25518080 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:08.380064+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbd000/0x0/0x4ffc00000, data 0x17e6bc3/0x18af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104980480 unmapped: 25518080 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbebc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbebc00 session 0x5649cdc2d4a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:09.380768+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204025 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 25509888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:10.381306+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 25509888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3000 session 0x5649cdc2d2c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbd000/0x0/0x4ffc00000, data 0x17e6bc3/0x18af000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:11.381850+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce399c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649ce399c00 session 0x5649cda95860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649ce399c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104988672 unmapped: 25509888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649ce399c00 session 0x5649cda950e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:12.382229+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0812800 session 0x5649cfb072c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb48000 session 0x5649cdc2cb40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105013248 unmapped: 25485312 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:13.382586+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105021440 unmapped: 25477120 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:14.382758+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278595 data_alloc: 234881024 data_used: 15269888
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:15.383070+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:16.383211+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.744391441s of 10.835625648s, submitted: 21
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:17.383502+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110182400 unmapped: 20316160 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:18.383864+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:19.384107+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278727 data_alloc: 234881024 data_used: 15269888
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:20.384333+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:21.384514+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:22.384785+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf4800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110215168 unmapped: 20283392 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:23.385095+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110223360 unmapped: 20275200 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:24.385273+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1280203 data_alloc: 234881024 data_used: 15269888
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110256128 unmapped: 20242432 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:25.385497+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112836608 unmapped: 17661952 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9dbc000/0x0/0x4ffc00000, data 0x17e6bd3/0x18b0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:26.385898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112902144 unmapped: 17596416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:27.386316+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.594102859s of 10.725716591s, submitted: 56
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:28.386897+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:29.387229+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf0800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351290 data_alloc: 234881024 data_used: 15495168
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:30.387583+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:31.387900+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113025024 unmapped: 17473536 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:32.388114+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:33.388268+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:34.388595+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1351158 data_alloc: 234881024 data_used: 15495168
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:35.388890+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113033216 unmapped: 17465344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:36.389064+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:37.389261+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.780820847s of 10.622550964s, submitted: 11
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf2400 session 0x5649cf925e00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf0800 session 0x5649cf925860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:38.389553+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:39.389747+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349595 data_alloc: 234881024 data_used: 15495168
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:40.389909+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:41.390090+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:42.390293+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:43.390480+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:44.390659+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1349595 data_alloc: 234881024 data_used: 15495168
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:45.390918+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:46.391119+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:47.391277+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:48.391482+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.931000710s of 10.931001663s, submitted: 0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649cfdb8f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649cf9754a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113065984 unmapped: 17432576 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01e9000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f94dc000/0x0/0x4ffc00000, data 0x20c6bd3/0x2190000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:49.391653+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d01e9000 session 0x5649ce471860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141386 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:50.391914+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:51.392094+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:52.392287+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:53.392519+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:54.392710+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142898 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa05000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:55.392880+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:56.393036+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:57.393194+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:58.393388+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:59.393547+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143819 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:00.393702+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:01.393899+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:02.394088+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.334737778s of 14.428777695s, submitted: 32
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:03.394211+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:04.394392+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143687 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:05.394592+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:06.394757+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:07.394923+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:08.395127+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:09.395275+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143687 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:10.395432+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:11.395568+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:12.395900+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:13.396112+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:14.396339+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1143687 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105955328 unmapped: 24543232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:15.396534+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf5800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf5800 session 0x5649d0f71c20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649d0f70f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649d06274a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105971712 unmapped: 24526848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce986000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb40000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.963714600s of 12.966668129s, submitted: 1
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:16.396729+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb40000 session 0x5649ce8d81e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe8c00 session 0x5649cf9741e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8400 session 0x5649d0ef0d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8400 session 0x5649cfb072c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649d0d2ab40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0175800 session 0x5649cf803680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649cf975e00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa809000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:17.396988+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:18.397148+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:19.397314+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1162657 data_alloc: 218103808 data_used: 4788224
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:20.397461+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:21.397639+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62cc00 session 0x5649d0e12d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104931328 unmapped: 25567232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:22.397773+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:23.398007+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:24.398153+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170594 data_alloc: 218103808 data_used: 5869568
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:25.398268+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:26.398439+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104939520 unmapped: 25559040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:27.398619+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf9800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.266611099s of 11.405261993s, submitted: 42
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:28.398869+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0ac1c20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf4800 session 0x5649cf924960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:29.399051+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170726 data_alloc: 218103808 data_used: 5869568
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:30.399230+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:31.399444+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:32.399621+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:33.399781+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 104947712 unmapped: 25550848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:34.399995+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa6e7000/0x0/0x4ffc00000, data 0xebbc25/0xf85000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1198584 data_alloc: 218103808 data_used: 5976064
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106487808 unmapped: 24010752 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:35.400160+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:36.400286+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:37.400638+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3cf000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:38.400903+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:39.401081+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1201352 data_alloc: 218103808 data_used: 6217728
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:40.401241+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3cf000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 23994368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:41.401408+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.696000099s of 13.824744225s, submitted: 41
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 23977984 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:42.401630+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 23977984 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:43.401880+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3cf000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106520576 unmapped: 23977984 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:44.402110+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197980 data_alloc: 218103808 data_used: 6221824
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:45.402266+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:46.402439+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:47.402647+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:48.402915+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:49.403080+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3d7000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197980 data_alloc: 218103808 data_used: 6221824
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:50.403294+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:51.403483+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3d7000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:52.403661+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 105406464 unmapped: 25092096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:53.403795+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df4800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.430935860s of 12.437618256s, submitted: 2
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df4800 session 0x5649d0ef12c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d0ef14a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9f3e000/0x0/0x4ffc00000, data 0x1663c4e/0x172e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 24272896 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:54.403985+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260497 data_alloc: 218103808 data_used: 6221824
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 24272896 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:55.404099+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:56.404228+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106225664 unmapped: 24272896 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:57.404385+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:58.404631+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:59.404768+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260365 data_alloc: 218103808 data_used: 6221824
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:00.404898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 106242048 unmapped: 24256512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc8a000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:01.405026+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:02.405143+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 40K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2672 syncs, 4.02 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1723 writes, 5203 keys, 1723 commit groups, 1.0 writes per commit group, ingest: 5.27 MB, 0.01 MB/s
                                           Interval WAL: 1723 writes, 743 syncs, 2.32 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:03.405293+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:04.405430+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315237 data_alloc: 234881024 data_used: 12255232
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:05.405559+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:06.405678+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110985216 unmapped: 19513344 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:07.405823+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 19472384 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:08.405983+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 19472384 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:09.406128+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111026176 unmapped: 19472384 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9bbe000/0x0/0x4ffc00000, data 0x19e3c87/0x1aae000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315237 data_alloc: 234881024 data_used: 12255232
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:10.406313+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111058944 unmapped: 19439616 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.627267838s of 16.744453430s, submitted: 31
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:11.406487+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112648192 unmapped: 17850368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9862000/0x0/0x4ffc00000, data 0x1d38c87/0x1e03000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:12.406633+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113483776 unmapped: 17014784 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:13.407097+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:14.407437+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345691 data_alloc: 234881024 data_used: 12300288
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:15.407627+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f97db000/0x0/0x4ffc00000, data 0x1dc0c87/0x1e8b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:16.407796+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:17.408434+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113467392 unmapped: 17031168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:18.408677+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113598464 unmapped: 16900096 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc8a000 session 0x5649ce59af00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:19.408867+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113606656 unmapped: 16891904 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649cf8021e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1204881 data_alloc: 218103808 data_used: 4124672
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:20.409055+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:21.409314+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fd6000/0x0/0x4ffc00000, data 0x11cbc25/0x1295000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:22.409583+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.292435646s of 11.659756660s, submitted: 96
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:23.409801+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109330432 unmapped: 21168128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649cda450e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649cda443c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:24.410148+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afbc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107151360 unmapped: 23347200 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afbc00 session 0x5649cfdd03c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:25.410411+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:26.410754+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:27.410976+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:28.411362+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:29.411650+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:30.411917+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:31.412278+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:32.412457+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:33.412591+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:34.412866+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:35.413103+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:36.413335+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:37.413550+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:38.413764+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:39.413987+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:40.414208+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:41.414410+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:42.414663+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:43.414900+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:44.415069+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:45.415267+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1157535 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:46.415472+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:47.415635+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:48.415846+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:49.415970+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107159552 unmapped: 23339008 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649ce59a5a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649ce59ab40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649ce8f70e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649cdc2d2c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:50.416103+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.890466690s of 28.021562576s, submitted: 47
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1161093 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649cdc2c780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afbc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afbc00 session 0x5649cfe0fc20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649cdd00d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107298816 unmapped: 23199744 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d0dc74a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649d08e0780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:51.416255+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:52.416409+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:53.416572+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:54.416779+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:55.416953+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1211944 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:56.417154+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d080f800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:57.417323+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 107315200 unmapped: 23183360 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:58.417521+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:59.417669+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:00.417892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261648 data_alloc: 234881024 data_used: 10063872
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:01.418081+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:02.418214+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109797376 unmapped: 20701184 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:03.418410+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:04.418642+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:05.418790+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261648 data_alloc: 234881024 data_used: 10063872
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:06.418949+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109830144 unmapped: 20668416 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9cd9000/0x0/0x4ffc00000, data 0x14b8c35/0x1583000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:07.419074+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 20627456 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:08.419260+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 109871104 unmapped: 20627456 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:09.419465+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.087341309s of 19.172225952s, submitted: 26
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111443968 unmapped: 19054592 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:10.419655+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1313314 data_alloc: 234881024 data_used: 10055680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 18604032 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f960e000/0x0/0x4ffc00000, data 0x1b7bc35/0x1c46000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:11.419793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 18554880 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:12.419999+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18407424 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:13.420193+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18407424 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:14.420956+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112091136 unmapped: 18407424 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:15.421080+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324806 data_alloc: 234881024 data_used: 10637312
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:16.421212+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:17.421348+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:18.421525+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:19.421891+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:20.422124+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:21.422716+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:22.423664+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112099328 unmapped: 18399232 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:23.423879+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:24.424840+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:25.424997+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:26.425140+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:27.425263+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:28.425615+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:29.425976+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112107520 unmapped: 18391040 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:30.426154+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:31.426346+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf7edc00 session 0x5649ce8d5a40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cddd0800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:32.426628+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf7cfc00 session 0x5649cda94780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf0400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:33.426885+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:34.427142+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112115712 unmapped: 18382848 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:35.427491+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324822 data_alloc: 234881024 data_used: 10637312
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112123904 unmapped: 18374656 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9602000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:36.427617+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df9000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.557266235s of 27.021982193s, submitted: 61
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df9000 session 0x5649ce59b860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0811400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0811400 session 0x5649cdc2de00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdab5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdab5400 session 0x5649cda452c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111460352 unmapped: 19038208 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649ce8f81e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88800 session 0x5649cda44f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:37.427781+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 19030016 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:38.428009+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649ce398000 session 0x5649d0ef1e00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df9000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111468544 unmapped: 19030016 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:39.428203+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:40.428534+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1374730 data_alloc: 234881024 data_used: 10641408
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:41.428898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:42.429111+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:43.429373+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:44.429530+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbee400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111476736 unmapped: 19021824 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:45.429745+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1383374 data_alloc: 234881024 data_used: 11751424
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113180672 unmapped: 17317888 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:46.429902+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114925568 unmapped: 15572992 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:47.430115+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 15564800 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:48.430280+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 15564800 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:49.430497+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 15564800 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:50.430742+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414534 data_alloc: 234881024 data_used: 16355328
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:51.431425+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:52.432278+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:53.432705+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114941952 unmapped: 15556608 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:54.433180+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.700933456s of 17.824674606s, submitted: 45
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115007488 unmapped: 15491072 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:55.433705+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1414366 data_alloc: 234881024 data_used: 16355328
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9050000/0x0/0x4ffc00000, data 0x2140c97/0x220c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,1])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 13524992 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:56.433872+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124428288 unmapped: 6070272 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:57.435947+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 6152192 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:58.437892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f850f000/0x0/0x4ffc00000, data 0x2c81c97/0x2d4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 6152192 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:59.439431+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124346368 unmapped: 6152192 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:00.439578+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1522140 data_alloc: 234881024 data_used: 17633280
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 6144000 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:01.439891+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124354560 unmapped: 6144000 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:02.440475+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:03.441101+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:04.441349+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f84f0000/0x0/0x4ffc00000, data 0x2ca0c97/0x2d6c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:05.442187+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1517884 data_alloc: 234881024 data_used: 17637376
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:06.442516+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:07.442889+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:08.443410+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 6135808 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.933344841s of 14.779636383s, submitted: 413
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:09.443587+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f84e6000/0x0/0x4ffc00000, data 0x2caac97/0x2d76000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 5971968 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:10.443729+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1518164 data_alloc: 234881024 data_used: 17637376
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124526592 unmapped: 5971968 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:11.444056+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbee400 session 0x5649cd74b2c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119554048 unmapped: 10944512 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5800 session 0x5649d01812c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:12.444279+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:13.444698+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:14.445045+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:15.445490+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f916c000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1334544 data_alloc: 234881024 data_used: 9682944
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:16.445700+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:17.446076+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:18.446235+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f916c000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:19.446420+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119562240 unmapped: 10936320 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:20.446610+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d080f800 session 0x5649d090a780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1333780 data_alloc: 234881024 data_used: 9682944
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.768727303s of 11.869369507s, submitted: 43
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119570432 unmapped: 10928128 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:21.446733+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf2c00 session 0x5649ce8f83c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f960a000/0x0/0x4ffc00000, data 0x1b87c35/0x1c52000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:22.446921+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:23.447059+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:24.447251+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:25.447400+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:26.448218+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114491392 unmapped: 16007168 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:27.448478+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:28.448727+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:29.449202+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:30.449397+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:31.449651+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:32.449963+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:33.450624+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:34.451086+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:35.451219+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:36.451420+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:37.451593+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:38.451827+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:39.452015+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:40.452174+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:41.452475+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:42.452772+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:43.452933+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:44.453115+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9ff8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:45.453276+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1182128 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113672192 unmapped: 16826368 heap: 130498560 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:46.453413+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6400 session 0x5649d0d10b40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d0d101e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0810000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0810000 session 0x5649cda44d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649d06265a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf2c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.952045441s of 26.003047943s, submitted: 21
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf2c00 session 0x5649d0180d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d080f800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d080f800 session 0x5649d0f714a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6400 session 0x5649cf8030e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:47.453618+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5400 session 0x5649d0656960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649ce59b4a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:48.453975+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:49.454105+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113377280 unmapped: 21323776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:50.454278+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217597 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fab000/0x0/0x4ffc00000, data 0x11e8bc3/0x12b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf983400 session 0x5649cfdd1680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa04c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:51.454443+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:52.454597+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:53.454774+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113360896 unmapped: 21340160 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:54.454973+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 21250048 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:55.455117+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1246001 data_alloc: 218103808 data_used: 6885376
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 21250048 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:56.455261+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fab000/0x0/0x4ffc00000, data 0x11e8bc3/0x12b1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113451008 unmapped: 21250048 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:57.455382+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.537994385s of 10.609987259s, submitted: 18
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88400 session 0x5649d0180f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649ce8d43c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:58.455539+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:59.455780+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:00.456006+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185801 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:01.456144+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:02.456293+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:03.456453+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:04.456615+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:05.457404+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1185801 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:06.457534+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:07.457683+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:08.457881+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110166016 unmapped: 24535040 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.857390404s of 10.914286613s, submitted: 20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [0,0,0,0,0,1,1])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0f71860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:09.457977+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:10.458111+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1248497 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:11.458223+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97c400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97c400 session 0x5649cf803680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c79000/0x0/0x4ffc00000, data 0x151abc3/0x15e3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97c400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97c400 session 0x5649cf802f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:12.458346+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110870528 unmapped: 23830528 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb49800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdb49800 session 0x5649cf8023c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc88400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc88400 session 0x5649cf802d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:13.458468+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 23748608 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:14.458553+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 110952448 unmapped: 23748608 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:15.458694+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113737728 unmapped: 20963328 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649d0d305a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649d01692c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1303764 data_alloc: 234881024 data_used: 10366976
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:16.458868+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649d0168780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:17.459005+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:18.459158+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:19.459280+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:20.459419+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:21.459608+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:22.459748+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:23.459885+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:24.460024+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:25.460219+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:26.460463+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:27.460638+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:28.460823+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:29.460945+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:30.461067+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:31.461184+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:32.461303+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:33.461414+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:34.461543+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:35.461654+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:36.461777+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:37.461898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:38.462088+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:39.462292+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:40.462428+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1193715 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:41.462580+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:42.462730+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc89c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc89c00 session 0x5649d0d10d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe8400 session 0x5649d0d11a40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8000 session 0x5649d0d11680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf97cc00 session 0x5649d0d10780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f8000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdc89c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 34.183135986s of 34.295379639s, submitted: 35
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:43.462891+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 22478848 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdc89c00 session 0x5649cf974960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe8400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe8400 session 0x5649cf975860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6c00 session 0x5649d0e12960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf8000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf8000 session 0x5649d0ef0d20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df5c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df5c00 session 0x5649cf948b40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:44.463084+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:45.463249+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1222981 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:46.463421+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:47.463543+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:48.463694+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 21291008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6000 session 0x5649cf949860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:49.463875+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113418240 unmapped: 21282816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf985400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbefc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:50.464051+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1238702 data_alloc: 218103808 data_used: 4853760
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:51.464238+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:52.464445+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:53.464579+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:54.464794+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:55.464950+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249494 data_alloc: 218103808 data_used: 6467584
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:56.465108+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:57.465269+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:58.465455+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa05e000/0x0/0x4ffc00000, data 0x1134c25/0x11fe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:59.465596+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:00.465704+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 113426432 unmapped: 21274624 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa05000 session 0x5649cf9245a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b000 session 0x5649cf975c20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1249494 data_alloc: 218103808 data_used: 6467584
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:01.465887+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.421611786s of 18.642799377s, submitted: 39
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114008064 unmapped: 20692992 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:02.466009+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116039680 unmapped: 18661376 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9d43000/0x0/0x4ffc00000, data 0x144fc25/0x1519000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:03.466205+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 18653184 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:04.466421+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116047872 unmapped: 18653184 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:05.466610+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 18644992 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278352 data_alloc: 218103808 data_used: 6868992
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:06.466741+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:07.466859+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9d3f000/0x0/0x4ffc00000, data 0x1453c25/0x151d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:08.467000+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:09.467123+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:10.467266+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278352 data_alloc: 218103808 data_used: 6868992
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:11.467400+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdb47400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.050851822s of 10.144117355s, submitted: 41
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9d3f000/0x0/0x4ffc00000, data 0x1453c25/0x151d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:12.467578+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:13.467729+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:14.467863+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116219904 unmapped: 18481152 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf985400 session 0x5649d0180780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbefc00 session 0x5649d0d2ab40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:15.468022+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115580928 unmapped: 19120128 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b000 session 0x5649cf803a40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199614 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:16.468245+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:17.468396+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d012a400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:18.468564+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:19.468737+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:20.469051+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199614 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:21.469190+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:22.469366+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:23.469509+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:24.469641+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:25.469764+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 111992832 unmapped: 22708224 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc ms_handle_reset ms_handle_reset con 0x5649cfb41c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1083080178
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1083080178,v1:192.168.122.100:6801/1083080178]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: get_auth_request con 0x5649cf97cc00 auth_method 0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc handle_mgr_configure stats_period=5
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199614 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:26.469933+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:27.470127+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:28.470288+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.266271591s of 17.418762207s, submitted: 48
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:29.470422+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:30.470557+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199482 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:31.470703+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:32.470902+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:33.471116+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:34.471240+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:35.471340+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199482 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:36.471478+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:37.471615+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:38.471751+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 22691840 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdaa8400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdaa8400 session 0x5649cfc1f680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf985c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf985c00 session 0x5649cfc1e3c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfa05800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa05800 session 0x5649cf802b40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0813800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0813800 session 0x5649cf8030e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0813800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.329735756s of 10.332962990s, submitted: 1
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:39.471861+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa082000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0813800 session 0x5649cf803680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 22380544 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:40.471977+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112320512 unmapped: 22380544 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234242 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:41.472157+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa010000/0x0/0x4ffc00000, data 0x1183bc3/0x124c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:42.472372+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:43.472510+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf0c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf0c00 session 0x5649d0f714a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:44.472673+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa010000/0x0/0x4ffc00000, data 0x1183bc3/0x124c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf6000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf6000 session 0x5649d0f71860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:45.472794+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649ce8d43c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 22315008 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234242 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:46.472961+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 22306816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb41400 session 0x5649ce8d85a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:47.475950+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 22306816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:48.476096+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf62dc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112394240 unmapped: 22306816 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:49.476223+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:50.476321+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260680 data_alloc: 218103808 data_used: 6393856
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:51.476446+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:52.476617+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:53.476735+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:54.476838+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:55.476939+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260680 data_alloc: 218103808 data_used: 6393856
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:56.477063+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:57.477191+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:58.478177+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa00f000/0x0/0x4ffc00000, data 0x1183bd3/0x124d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 112934912 unmapped: 21766144 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.720109940s of 19.755855560s, submitted: 12
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:59.478298+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 114401280 unmapped: 20299776 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:00.478422+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 19587072 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292800 data_alloc: 218103808 data_used: 6426624
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:01.478555+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115113984 unmapped: 19587072 heap: 134701056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6800 session 0x5649d0f70f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649cd74a960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:02.478669+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:03.478923+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93de000/0x0/0x4ffc00000, data 0x1db3c35/0x1e7e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:04.479051+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:05.479171+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df7c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df7c00 session 0x5649d0ef1860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df8800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df8800 session 0x5649d0ef14a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1355699 data_alloc: 218103808 data_used: 6426624
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:06.479365+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 23388160 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf3c00 session 0x5649d0ef10e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:07.479510+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0ef0b40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 23388160 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93bf000/0x0/0x4ffc00000, data 0x1dd2c35/0x1e9d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df6800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:08.479682+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115507200 unmapped: 23388160 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:09.479853+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df7c00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115556352 unmapped: 23339008 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:10.479977+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 18006016 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421657 data_alloc: 234881024 data_used: 15912960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:11.480162+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120889344 unmapped: 18006016 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.846256256s of 13.088277817s, submitted: 78
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:12.480275+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93be000/0x0/0x4ffc00000, data 0x1dd2c45/0x1e9e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:13.480431+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:14.480642+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:15.480926+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:16.481061+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1421601 data_alloc: 234881024 data_used: 15912960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:17.481184+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121061376 unmapped: 17833984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f93b4000/0x0/0x4ffc00000, data 0x1ddcc45/0x1ea8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:18.481311+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 17760256 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:19.481431+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 121135104 unmapped: 17760256 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:20.481545+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124837888 unmapped: 14057472 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:21.481850+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1461783 data_alloc: 234881024 data_used: 16588800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 125165568 unmapped: 13729792 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:22.481984+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124641280 unmapped: 14254080 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f64000/0x0/0x4ffc00000, data 0x222cc45/0x22f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:23.482146+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:24.482266+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f64000/0x0/0x4ffc00000, data 0x222cc45/0x22f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:25.482456+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f64000/0x0/0x4ffc00000, data 0x222cc45/0x22f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:26.482593+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1465059 data_alloc: 234881024 data_used: 17182720
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:27.482714+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:28.482895+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124682240 unmapped: 14213120 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:29.483040+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124690432 unmapped: 14204928 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:30.483178+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.803529739s of 18.944644928s, submitted: 59
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:31.483346+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1466219 data_alloc: 234881024 data_used: 17256448
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f8f62000/0x0/0x4ffc00000, data 0x222dc45/0x22f9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:32.483503+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:33.483641+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124706816 unmapped: 14188544 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:34.483892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 14180352 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df7c00 session 0x5649d06270e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df6800 session 0x5649ce8f9e00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:35.484073+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0df8000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df8000 session 0x5649cdc2d2c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:36.484206+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296890 data_alloc: 218103808 data_used: 6426624
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9a37000/0x0/0x4ffc00000, data 0x14dabd3/0x15a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:37.484335+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:38.484488+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9a37000/0x0/0x4ffc00000, data 0x14dabd3/0x15a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:39.484608+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:40.484719+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117989376 unmapped: 20905984 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf62dc00 session 0x5649d0ac0780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb41400 session 0x5649d0656960
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:41.484846+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296890 data_alloc: 218103808 data_used: 6426624
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.182349205s of 10.414586067s, submitted: 23
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115523584 unmapped: 23371776 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0ef12c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:42.484955+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:43.485081+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:44.485201+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:45.485380+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:46.485511+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:47.485634+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:48.485771+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:49.485919+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:50.486065+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:51.486239+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:52.486376+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:53.486500+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:54.486673+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:55.486858+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:56.487005+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:57.487160+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:58.487306+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:59.487480+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:00.487608+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:01.487877+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:02.488006+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:03.488125+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:04.488280+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:05.488416+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:06.488574+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212877 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:07.488701+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf4400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf4400 session 0x5649cda450e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01fc000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d01fc000 session 0x5649ce470f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe9000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe9000 session 0x5649d0d10f00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 23363584 heap: 138895360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbe9000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbe9000 session 0x5649cf925680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 26.502956390s of 26.523035049s, submitted: 9
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb41400 session 0x5649ce59b680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf4400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:08.488892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf4400 session 0x5649cf803860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d01fc000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d01fc000 session 0x5649d090a3c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d01694a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649d0afb800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0afb800 session 0x5649d0d103c0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 26525696 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:09.489153+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116572160 unmapped: 26525696 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:10.489395+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 26517504 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b400 session 0x5649ce471a40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:11.489598+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270568 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf9d9800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf9d9800 session 0x5649cf9250e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116580352 unmapped: 26517504 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:12.489793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb40400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfb40400 session 0x5649d01805a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf7400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf7400 session 0x5649d0181860
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c75000/0x0/0x4ffc00000, data 0x151dbd3/0x15e7000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115531776 unmapped: 27566080 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:13.490145+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf7400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf80b400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 115548160 unmapped: 27549696 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:14.490322+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:15.490463+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:16.490613+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324602 data_alloc: 234881024 data_used: 10555392
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:17.490736+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:18.490872+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:19.491028+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:20.491156+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:21.491341+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1324602 data_alloc: 234881024 data_used: 10555392
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:22.491464+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:23.491575+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117948416 unmapped: 25149440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:24.491733+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9c74000/0x0/0x4ffc00000, data 0x151dbe3/0x15e8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.495733261s of 16.590114594s, submitted: 20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119349248 unmapped: 23748608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:25.491884+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 119431168 unmapped: 23666688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:26.492036+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361224 data_alloc: 234881024 data_used: 10567680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9910000/0x0/0x4ffc00000, data 0x1881be3/0x194c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:27.492168+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:28.492388+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:29.492584+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:30.492700+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:31.492879+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1361224 data_alloc: 234881024 data_used: 10567680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:32.493017+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9901000/0x0/0x4ffc00000, data 0x1890be3/0x195b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:33.493137+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9901000/0x0/0x4ffc00000, data 0x1890be3/0x195b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:34.493307+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:35.493429+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:36.493566+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1360096 data_alloc: 234881024 data_used: 10567680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.018393517s of 12.156224251s, submitted: 39
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf7400 session 0x5649cda45a40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cf80b400 session 0x5649d0d2b0e0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 120561664 unmapped: 22536192 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cdaa8000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:37.493677+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cdaa8000 session 0x5649cd74a780
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:38.493886+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:39.494086+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:40.494289+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:41.494494+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:42.494703+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:43.494885+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:44.495137+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:45.495308+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:46.495497+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:47.495689+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:48.495900+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:49.496060+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:50.496180+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:51.496295+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:52.496419+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:53.496581+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:54.496730+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:55.496891+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:56.497056+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:57.497213+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:58.497432+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:59.497654+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:00.497781+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:01.497923+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:02.498069+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:03.498238+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:04.498407+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:05.498559+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116310016 unmapped: 26787840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:06.498706+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:07.498828+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:08.498980+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:09.499195+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:10.499350+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:11.499506+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:12.499705+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:13.499891+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116318208 unmapped: 26779648 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:14.500089+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:15.500225+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:16.500401+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:17.500547+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:18.500731+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:19.500900+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:20.501066+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:21.501239+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116326400 unmapped: 26771456 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:22.501428+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:23.501603+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:24.501906+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:25.502094+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:26.502236+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:27.502586+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:28.502892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:29.503044+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 26763264 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:30.503267+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:31.503471+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:32.503705+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:33.503899+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:34.504165+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:35.504384+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116342784 unmapped: 26755072 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:36.504552+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:37.504700+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:38.504880+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:39.505035+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:40.505183+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:41.505353+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:42.505520+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:43.505736+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:44.505951+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:45.506248+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:46.507019+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:47.507262+0000)
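The _check_auth_rotating lines embed an absolute expiry stamp for the rotating auth secrets, advancing one second per tick. Worth flagging: every expiry here (19:17:xx) sits roughly fourteen minutes before the 19:31:44 syslog timestamp the journal put on the line, which suggests these debug entries were buffered and flushed in one burst rather than written live. A small sketch for turning the stamp into a time-to-expiry; the %z directive accepts the +0000 suffix exactly as printed.

    from datetime import datetime, timezone

    def seconds_until_expiry(stamp, now):
        """Parse a _check_auth_rotating expiry stamp; return seconds remaining."""
        expiry = datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%f%z")
        return (expiry - now).total_seconds()

    # Stamp copied from the line above; 'now' is arbitrary for the demo.
    now = datetime(2026, 1, 20, 18, 20, tzinfo=timezone.utc)
    print(seconds_until_expiry("2026-01-20T19:17:47.507262+0000", now))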
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:48.507740+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:49.508171+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:50.508455+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:51.508788+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116350976 unmapped: 26746880 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:52.509272+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:53.509493+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:54.510326+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:55.510793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116359168 unmapped: 26738688 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:56.511258+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:57.511396+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:58.511547+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:59.511695+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:00.511825+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:01.512444+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:02.512554+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116367360 unmapped: 26730496 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config diff' '{prefix=config diff}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config show' '{prefix=config show}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:03.512757+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116408320 unmapped: 26689536 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:04.512856+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 26550272 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'log dump' '{prefix=log dump}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:05.513204+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116547584 unmapped: 26550272 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'perf dump' '{prefix=perf dump}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'perf schema' '{prefix=perf schema}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
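The do_command pairs above record a walk of this OSD's admin socket: config diff, config show, counter dump, counter schema, log dump, perf dump, perf histogram dump, perf schema. The same commands can be replayed against a live daemon with the stock ceph daemon CLI; the subprocess route below is a sketch that deliberately avoids assuming the admin socket's wire format.

    import json
    import subprocess

    def osd_admin(daemon, *cmd):
        """Run an admin-socket command via `ceph daemon` and parse the JSON reply."""
        out = subprocess.run(
            ["ceph", "daemon", daemon, *cmd],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # Replays two of the commands logged above against osd.0.
    perf = osd_admin("osd.0", "perf", "dump")
    diff = osd_admin("osd.0", "config", "diff")

Every command in this capture logged "result is 0 bytes"; whether that means the reply was genuinely empty or the debug line simply does not account for output returned through another path is not something the log alone settles.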
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:06.513336+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 26271744 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:07.513452+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 26271744 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:08.513696+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 26271744 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:09.513875+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 26271744 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:10.513993+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116826112 unmapped: 26271744 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:11.514127+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:12.514339+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:13.514535+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:14.514966+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:15.515100+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:16.515228+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:17.515368+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:18.515517+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:19.515634+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:20.515757+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:21.515899+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:22.516049+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:23.516201+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:24.516333+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:25.516470+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116834304 unmapped: 26263552 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:26.516589+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:27.516684+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:28.516846+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:29.516991+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:30.517124+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:31.517251+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:32.517389+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:33.517537+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:34.517690+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116842496 unmapped: 26255360 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:35.517830+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:36.518002+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:37.518129+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:38.518274+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:39.518411+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:40.518508+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:41.518696+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:42.518898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:43.519044+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:44.519155+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:45.519318+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:46.519502+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:47.519668+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116850688 unmapped: 26247168 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:48.519871+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:49.520094+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:50.520262+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:51.520397+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:52.520592+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:53.520729+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:54.520890+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:55.521101+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:56.521270+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 26238976 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:57.521393+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:58.521541+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:59.521628+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:00.521779+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:01.521952+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:02.522124+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:03.522278+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:04.522476+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:05.522608+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 26230784 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:06.522729+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:07.522877+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:08.523012+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:09.523164+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:10.523294+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:11.523503+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:12.523663+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:13.523800+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:14.523941+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:15.524084+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:16.524229+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 26222592 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:17.524385+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:18.524555+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:19.524717+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:20.524916+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:21.525065+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:22.525355+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:23.525516+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:24.525667+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:25.525877+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:26.526074+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:27.526314+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:28.526609+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116883456 unmapped: 26214400 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:29.526755+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:30.527007+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:31.527148+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:32.527297+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:33.527520+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:34.527696+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:35.527882+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:36.528014+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:37.528190+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:38.528398+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:39.528636+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:40.528761+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116891648 unmapped: 26206208 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:41.528971+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:42.529293+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398342518' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:43.529500+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:44.529648+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:45.529848+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:46.529986+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:47.530122+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:48.530474+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:49.530590+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 26198016 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:50.530715+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:51.530912+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:52.531119+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:53.531265+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:54.531657+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:55.531991+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:56.532308+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:57.532472+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:58.532777+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:59.533010+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:00.533228+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:01.533430+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:02.533648+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 47K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 12K writes, 3584 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2134 writes, 6845 keys, 2134 commit groups, 1.0 writes per commit group, ingest: 7.09 MB, 0.01 MB/s
                                           Interval WAL: 2134 writes, 912 syncs, 2.34 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:03.533860+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:04.534014+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:05.534221+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:06.534440+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:07.534601+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:08.534780+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:09.534916+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116908032 unmapped: 26189824 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:10.535091+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:11.535246+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:12.535396+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:13.535551+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:14.535700+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:15.535854+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:16.535997+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:17.536145+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116916224 unmapped: 26181632 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:18.536309+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:19.536460+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:20.536604+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:21.536723+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:22.536879+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:23.537020+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:24.537183+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:25.537305+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116924416 unmapped: 26173440 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:26.537493+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:27.537622+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:28.537867+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:29.538065+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:30.538217+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:31.538347+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:32.538505+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:33.538669+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:34.538822+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:35.538971+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:36.539112+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:37.539298+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:38.539483+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:39.539705+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:40.539864+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:41.540045+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:42.540211+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:43.540421+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:44.540624+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:45.540745+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:46.540916+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:47.541075+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:48.541246+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:49.541397+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:50.541564+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:51.541733+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:52.541893+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:53.542015+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:54.542145+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:55.542333+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:56.542498+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:57.542661+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 26157056 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:58.542862+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:59.543047+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:00.543174+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:01.543311+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:02.543449+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:03.543574+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:04.543767+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:05.543913+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:06.544089+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:07.544297+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:08.544552+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:09.544692+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:10.544859+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:11.545066+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:12.545220+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:13.545348+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116948992 unmapped: 26148864 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:14.545450+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:15.545575+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:16.545700+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:17.545854+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:18.546016+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:19.546140+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:20.546246+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:21.546366+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:22.546504+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:23.546699+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:24.546857+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:25.546978+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:26.547159+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:27.547303+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:28.547464+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:29.547586+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:30.547714+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116957184 unmapped: 26140672 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:31.547857+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:32.548085+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:33.548226+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:34.548404+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:35.548610+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:36.548745+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:37.548850+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:38.549016+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 26132480 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:39.549195+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:40.549355+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:41.549490+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:42.549604+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:43.549710+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:44.549832+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:45.550028+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:46.550164+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:47.550342+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:48.550491+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:49.550636+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:50.550718+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4fa3f9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:51.550843+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:52.551012+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:53.551170+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116973568 unmapped: 26124288 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 317.120391846s of 317.198272705s, submitted: 28
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:54.551465+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116932608 unmapped: 26165248 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:55.551605+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26050560 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:56.551778+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117047296 unmapped: 26050560 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:57.551905+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:58.552062+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:59.552245+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:00.552381+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:01.552537+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:02.552693+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:03.553052+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:04.553268+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:05.555506+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:06.555918+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117055488 unmapped: 26042368 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:07.556910+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26034176 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:08.559354+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26034176 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:09.560865+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26034176 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:10.561943+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117063680 unmapped: 26034176 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:11.562229+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:12.562695+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:13.563695+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:14.563999+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:15.564192+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:16.564532+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:17.564725+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:18.565379+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117071872 unmapped: 26025984 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:19.565998+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26017792 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:20.566152+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117080064 unmapped: 26017792 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:21.566385+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:22.566684+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:23.566860+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:24.567055+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:25.567222+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:26.567385+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:27.567862+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:28.568004+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:29.568222+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:30.568422+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:31.568544+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:32.568666+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:33.568850+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:34.568993+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117088256 unmapped: 26009600 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:35.569091+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:36.569234+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:37.569368+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:38.569541+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:39.569735+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:40.569897+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:41.570074+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:42.570146+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:43.570288+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:44.570448+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:45.570562+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:46.570713+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:47.570878+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:48.571074+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:49.571242+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:50.571414+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:51.571578+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:52.571842+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:53.572073+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:54.572261+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:55.572484+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:56.572668+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:57.572932+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:58.573109+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:59.573233+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:00.573453+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:01.573590+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:02.573719+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:03.573906+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:04.574047+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:05.574197+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:06.574372+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:07.574520+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:08.575187+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:09.575600+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:10.576400+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:11.576976+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:12.577472+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:13.577897+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:14.578138+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:15.578538+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:16.579413+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:17.579632+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:18.580267+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:19.580595+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:20.581135+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:21.581352+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:22.581729+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:23.582030+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:24.582432+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:25.582631+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:26.582936+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:27.583188+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:28.583398+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117096448 unmapped: 26001408 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:29.583585+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:30.583793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:31.584107+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:32.584375+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:33.584793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:34.584954+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:35.585160+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:36.585331+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:37.585483+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:38.585667+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:39.585790+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:40.585964+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:41.586080+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:42.586297+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:43.586449+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:44.586666+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:45.586863+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117104640 unmapped: 25993216 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:46.587089+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:47.587836+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:48.588009+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:49.588167+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:50.588322+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:51.588443+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:52.588583+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:53.588748+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:54.588898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:55.589024+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 25985024 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:56.589209+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:57.589355+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:58.589528+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:59.589681+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:00.589952+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:01.590124+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:02.590301+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:03.590448+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:04.590615+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:05.590744+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:06.591010+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:07.591275+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:08.591524+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:09.591692+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117121024 unmapped: 25976832 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:10.591864+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:11.591997+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:12.592143+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:13.592290+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:14.593416+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:15.594582+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:16.595200+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:17.595787+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:18.596100+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:19.596465+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:20.596595+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:21.596988+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:22.597386+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:23.597501+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:24.597660+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:25.598066+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 25968640 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:26.598296+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:27.598648+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:28.598896+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:29.599057+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:30.599293+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:31.599522+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:32.599762+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:33.600012+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:34.600194+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:35.600417+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:36.600570+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:37.600859+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:38.601028+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:39.601166+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:40.601422+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117137408 unmapped: 25960448 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:41.601593+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:42.601745+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:43.601881+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:44.602016+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:45.602204+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:46.602389+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:47.602577+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:48.602739+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:49.602885+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:50.603045+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:51.603175+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:52.603317+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:53.603515+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:54.603717+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:55.603911+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:56.604076+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:57.604241+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117145600 unmapped: 25952256 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:58.604435+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:59.604552+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:00.604745+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:01.604854+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:02.604989+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets getting new tickets!
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:03.605252+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _finish_auth 0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:03.606347+0000)
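
The five lines above are the only state change in the section's otherwise uniform tick loop: _check_auth_tickets decides the service tickets are due ("getting new tickets!"), sends the request to mon.compute-0 at v2:192.168.122.100:3300/0, and _finish_auth 0 reads as the exchange completing with status 0 before the loop resumes. The expiry stamps printed by _check_auth_rotating are also the easiest way to see the loop's cadence, since journald stamped every line here with the same second (19:31:44) while the expiries advance steadily (and sit several minutes behind the wall clock, so these debug lines were evidently flushed in a burst). A short extraction sketch, with the regex written against the exact message text above and nothing more:

    import re
    from datetime import datetime

    # Pull the expiry stamp out of each _check_auth_rotating line. The
    # trailing "+0000" offset is deliberately left outside the capture so
    # datetime.fromisoformat() accepts it without Python 3.11's offset parsing.
    EXPIRY = re.compile(r"expire after (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+)")

    lines = [
        "monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:03.605252+0000)",
        "monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:04.605415+0000)",
    ]
    stamps = [datetime.fromisoformat(EXPIRY.search(l).group(1)) for l in lines]
    for a, b in zip(stamps, stamps[1:]):
        print((b - a).total_seconds())  # ~1.0

Run over the whole section, the deltas come out at roughly 1.0 s apiece, which suggests a once-per-second monclient tick behind every tick/_check_auth_tickets/_check_auth_rotating triple in this capture.
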
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:04.605415+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:05.605544+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117153792 unmapped: 25944064 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:06.605736+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:07.605919+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:08.606071+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:09.606233+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:10.606402+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:11.606586+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:12.606799+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:13.606974+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:14.607118+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:15.607336+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:16.607482+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:17.607644+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:18.607900+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:19.608065+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:20.608237+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:21.608450+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:22.608658+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117161984 unmapped: 25935872 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:23.608793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 25927680 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:24.609001+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:25.609121+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:26.609276+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:27.609423+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:28.609582+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:29.609711+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:30.609865+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:31.610012+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:32.610218+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:33.610405+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:34.610541+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:35.610717+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117178368 unmapped: 25919488 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:36.610849+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:37.610970+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:38.611104+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:39.611273+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:40.611419+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:41.611560+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:42.611708+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:43.611846+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:44.611975+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:45.612115+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:46.612285+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:47.612419+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:48.612583+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:49.612761+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:50.612951+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:51.613093+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:52.613236+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117186560 unmapped: 25911296 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:53.613355+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:54.613501+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:55.613630+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:56.613764+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:57.613891+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:58.614102+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:59.614367+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:00.614513+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:01.614668+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:02.614874+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117194752 unmapped: 25903104 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:03.615041+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:04.615174+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:05.615317+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:06.615481+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:07.615686+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:08.615897+0000)
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27097 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18318 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27275 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:09.616066+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:10.616430+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:11.616584+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:12.616711+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:13.616855+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:14.617002+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:15.617188+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:16.617411+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:17.617554+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:18.617766+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117202944 unmapped: 25894912 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:19.617910+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:20.618047+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:21.618237+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:22.618354+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:23.618494+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:24.618712+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:25.618844+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:26.618998+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:27.619171+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:28.619323+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:29.619462+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:30.619577+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:31.619680+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cddd0800 session 0x5649d0168b40
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbeb400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:32.619863+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfbf0400 session 0x5649ce8f74a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf5000
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:33.620037+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:34.620231+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117211136 unmapped: 25886720 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:35.620417+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:36.620591+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:37.620734+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d0df9000 session 0x5649cda95680
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfbf3800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:38.620868+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:39.621032+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:40.621168+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:41.621307+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:42.621434+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:43.621625+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:44.621843+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:45.622022+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:46.622152+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:47.622295+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:48.622492+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:49.622644+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117219328 unmapped: 25878528 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:50.622785+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:51.622976+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:52.623101+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:53.623251+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:54.623480+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:55.623705+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:56.623868+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:57.624008+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:58.624204+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117227520 unmapped: 25870336 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:59.624368+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:00.624558+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:01.624710+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:02.624850+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:03.624972+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:04.625086+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:05.625278+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:06.625466+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:07.625641+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:08.625892+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:09.626085+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:10.626252+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:11.626417+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:12.626585+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:13.626715+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:14.626902+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117235712 unmapped: 25862144 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:15.627049+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:16.627183+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:17.627418+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:18.627608+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:19.627911+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:20.628044+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:21.628167+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:22.628409+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:23.628563+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:24.628686+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:25.628912+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:26.629052+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:27.629165+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:28.629302+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:29.629449+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:30.629600+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117243904 unmapped: 25853952 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:31.629793+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:32.629984+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:33.630106+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:34.630243+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:35.630357+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:36.630503+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:37.630662+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117252096 unmapped: 25845760 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:38.630888+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:39.631066+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:40.631246+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:41.631391+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:42.631527+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:43.631679+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:44.631798+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:45.632022+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:46.632136+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:47.632291+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:48.632545+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:49.632780+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:50.632990+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649cfa04c00 session 0x5649cdd014a0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cfb41400
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:51.633181+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:52.633295+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:53.633497+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117260288 unmapped: 25837568 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:54.633650+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:55.633896+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:56.634081+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:57.634255+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:58.634464+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:27:59.634590+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:00.634859+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:01.634987+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:02.635171+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:03.635346+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:04.635481+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:05.635613+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:06.635759+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:07.635853+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:08.635999+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117268480 unmapped: 25829376 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:09.636133+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:10.636269+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:11.636432+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:12.636628+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:13.636782+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:14.636927+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:15.637129+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:16.637268+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:17.637402+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:18.637546+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:19.637668+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:20.637788+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:21.637933+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117276672 unmapped: 25821184 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:22.638070+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:23.638210+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:24.638384+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:25.638551+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:26.638750+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:27.638922+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:28.639168+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:29.639310+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:30.639469+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:31.639649+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:32.639851+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:33.639987+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117284864 unmapped: 25812992 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:34.640117+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:35.640444+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:36.640588+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:37.640962+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:38.641269+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:39.641449+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:40.641585+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:41.641895+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:42.642063+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:43.642345+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:44.642602+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:45.642906+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117293056 unmapped: 25804800 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:46.643095+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:47.643292+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:48.643536+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:49.643708+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:50.643933+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:51.644112+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:52.644244+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:53.644474+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:54.644747+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:55.644906+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:56.645157+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:57.645334+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 25796608 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:58.645526+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:28:59.645653+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:00.645799+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:01.646048+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:02.646206+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:03.646366+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:04.646695+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:05.646931+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117309440 unmapped: 25788416 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:06.647130+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:07.647292+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:08.647468+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:09.647780+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:10.648001+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:11.648238+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:12.648384+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:13.648528+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:14.648780+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:15.649094+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:16.649242+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:17.649432+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 ms_handle_reset con 0x5649d012a400 session 0x5649d0181c20
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: handle_auth_request added challenge on 0x5649cf97d800
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:18.649612+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:19.649756+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117317632 unmapped: 25780224 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:20.649934+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:21.650119+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:22.650245+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:23.650392+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:24.650575+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:25.650764+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc ms_handle_reset ms_handle_reset con 0x5649cf97cc00
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/1083080178
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/1083080178,v1:192.168.122.100:6801/1083080178]
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: get_auth_request con 0x5649cf7ed800 auth_method 0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: mgrc handle_mgr_configure stats_period=5
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:26.650948+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:27.651101+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:28.651266+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:29.651427+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:30.651569+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117325824 unmapped: 25772032 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:31.651762+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:32.651959+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:33.652147+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:34.652313+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:35.652547+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:36.653069+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:37.654660+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 117334016 unmapped: 25763840 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:38.656927+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 26443776 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:39.657130+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 26443776 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:40.657663+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 26443776 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:41.658120+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 26443776 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:42.658390+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 26443776 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:43.658508+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:44.658951+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116654080 unmapped: 26443776 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:45.659150+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:46.659290+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:47.659574+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:48.660226+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:49.660723+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:50.660867+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:51.660998+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:52.661197+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:53.661414+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:54.661578+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:55.661723+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:56.661856+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:57.661990+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:58.662218+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 26435584 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:29:59.662386+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:00.662531+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:01.662782+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:02.662990+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 13K writes, 48K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
                                           Cumulative WAL: 13K writes, 3863 syncs, 3.48 writes per sync, written: 0.03 GB, 0.01 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 565 writes, 856 keys, 565 commit groups, 1.0 writes per commit group, ingest: 0.27 MB, 0.00 MB/s
                                           Interval WAL: 565 writes, 279 syncs, 2.03 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:03.663197+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:04.663363+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:05.663509+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:06.663693+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:07.663881+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:08.664094+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:09.664284+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:10.664417+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116678656 unmapped: 26419200 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:11.664554+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:12.664709+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:13.664898+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:14.664982+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:15.665151+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:16.665340+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:17.665521+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:18.665883+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:19.666014+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 26402816 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:20.666207+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 26402816 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:21.666412+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 26402816 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:22.666559+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 26402816 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:23.666708+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 26402816 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:24.666838+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116695040 unmapped: 26402816 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:25.666978+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:26.667110+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:27.667296+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:28.667452+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:29.667575+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:30.667691+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:31.667845+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:32.667988+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:33.668125+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:34.668270+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:35.668385+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:36.668522+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:37.668651+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:38.668836+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:39.669091+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 26394624 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:40.669903+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 26386432 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:41.670469+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 26386432 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:42.670932+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 26386432 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:43.671300+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116711424 unmapped: 26386432 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:44.671581+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:45.671854+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:46.671992+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:47.672234+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:48.672403+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:49.672578+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:50.672781+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:51.673240+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:52.673660+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:53.674569+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 26378240 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:54.674725+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:55.674843+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:56.675030+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:57.675213+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:58.675546+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:30:59.675692+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:00.675849+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:01.676306+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:02.676616+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:03.676783+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:04.676993+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:05.677185+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:06.677387+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 26370048 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:07.677512+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 26361856 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:08.677686+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 26361856 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:09.677826+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 26361856 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:31:44 compute-0 ceph-osd[82836]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:31:44 compute-0 ceph-osd[82836]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1223377 data_alloc: 218103808 data_used: 2691072
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:10.677994+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 26361856 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config diff' '{prefix=config diff}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config show' '{prefix=config show}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:11.678172+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:12.678379+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 26411008 heap: 143097856 old mem: 2845415832 new mem: 2845415832
Jan 20 19:31:44 compute-0 ceph-osd[82836]: osd.0 167 heartbeat osd_stat(store_statfs(0x4f9fe9000/0x0/0x4ffc00000, data 0xd9abc3/0xe63000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x4daf9c5), peers [1,2] op hist [])
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: tick
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_tickets
Jan 20 19:31:44 compute-0 ceph-osd[82836]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:31:13.678529+0000)
Jan 20 19:31:44 compute-0 ceph-osd[82836]: do_command 'log dump' '{prefix=log dump}'
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1502: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27112 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:31:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885919997' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:44 compute-0 rsyslogd[1003]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18336 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27287 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.18276 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.18291 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27067 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27245 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.18300 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27082 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27260 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3082253929' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3179761877' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2398342518' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27097 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.18318 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27275 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: pgmap v1502: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2932914022' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27112 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/885919997' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2075204829' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.18336 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: from='client.27287 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:44 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 19:31:44 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2253305485' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27127 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18348 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27299 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:45.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:45 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 20 19:31:45 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/679602397' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27142 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27317 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:45 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000027s ======
Jan 20 19:31:45 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:45.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000027s
Jan 20 19:31:45 compute-0 crontab[298248]: (root) LIST (root)
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18381 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27154 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:45 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27338 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1551551244' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2253305485' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/150945761' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.27127 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.18348 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.27299 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3934610891' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1959845038' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/679602397' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.27142 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.18369 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.27317 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1620609960' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.18381 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: from='client.27154 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27175 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18402 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27347 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1503: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 20 19:31:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468172566' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27190 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18414 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27362 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 20 19:31:46 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1428378427' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:31:46 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27202 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18432 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27374 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.27338 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3188209980' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.27175 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3458540550' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.18402 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.27347 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: pgmap v1503: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1468172566' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.27190 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.18414 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.27362 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1428378427' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3520448898' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 20 19:31:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4285810744' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:47.323Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:47.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:47 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27217 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27392 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 20 19:31:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270437616' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 20 19:31:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835758809' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:31:47 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:47 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:47 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:47.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 20 19:31:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452961278' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:31:47 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 20 19:31:47 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1811467043' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.27202 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.18432 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.27374 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/4285810744' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/292494728' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3680803113' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.27217 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.27392 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/270437616' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3835758809' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3469245052' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3375339644' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/452961278' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3539019445' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1811467043' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/374008922' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1854889701' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 nova_compute[254061]: 2026-01-20 19:31:48.193 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:48 compute-0 nova_compute[254061]: 2026-01-20 19:31:48.194 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913967838' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1504: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3224317637' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3389365456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3389365456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2796585972' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1533502780' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:31:48 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:48.924Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:48 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 19:31:48 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3862199934' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:31:49 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1297096839' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3913967838' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: pgmap v1504: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1969411652' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2394740420' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3224317637' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2956451040' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3389365456' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.10:0/3389365456' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4267601918' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2796585972' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1646804361' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1055827096' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1533502780' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1118470649' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3862199934' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4180529349' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 systemd[1]: Started Hostname Service.
Jan 20 19:31:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 20 19:31:49 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1412544954' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 20 19:31:49 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3135928015' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:31:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:49.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:49 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:49 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:49 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:49.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:49 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 20 19:31:49 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2120045006' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:31:49 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18573 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:49 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-mgr-compute-0-cepfkm[74672]: ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:49] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:31:49 compute-0 ceph-mgr[74676]: [prometheus INFO cherrypy.access.140411700757024] ::ffff:192.168.122.100 - - [20/Jan/2026:19:31:49] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Jan 20 19:31:50 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 20 19:31:50 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430127266' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18597 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1505: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2877799098' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1412544954' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2420082395' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1536840217' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3135928015' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/398378657' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2010236714' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2120045006' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3776226938' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4241463156' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.18573 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2748954301' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/666367408' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3430127266' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2139573955' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1996355057' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18609 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27503 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27346 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18621 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:50 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27355 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27527 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 20 19:31:51 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3456977199' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27533 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27361 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27367 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.18597 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: pgmap v1505: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/3359588572' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3382037845' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.18609 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2158881023' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.27503 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2417462206' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.27346 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3456977199' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:31:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:51.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:51 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:51 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:51 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:51.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27545 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Jan 20 19:31:51 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3727306076' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18651 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:51 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18666 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 20 19:31:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465053894' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27391 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1506: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27569 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.18621 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27355 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27527 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27533 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27361 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.18636 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27367 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27545 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3727306076' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.18651 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.27379 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/880747740' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/465053894' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2325939028' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18675 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 20 19:31:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447680740' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:52 compute-0 ceph-mon[74381]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27584 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:52 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:52 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18708 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 sudo[299402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:31:53 compute-0 sudo[299402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:53 compute-0 sudo[299402]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:53 compute-0 sudo[299432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Jan 20 19:31:53 compute-0 sudo[299432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:53 compute-0 nova_compute[254061]: 2026-01-20 19:31:53.195 254065 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 20 19:31:53 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27599 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27439 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:53.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.27560 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.18666 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.27391 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: pgmap v1506: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.27569 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2175370745' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.18675 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.27397 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2447680740' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/2822699619' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.27584 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.18687 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4046082601' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/1463086372' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/1667997069' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:53 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:53 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:53 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:53.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:53 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:53 compute-0 sudo[299432]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:53 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3496361360' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:53 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18783 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1507: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:54 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1508: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:31:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:31:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Jan 20 19:31:54 compute-0 ceph-mon[74381]: log_channel(audit) log [INF] : from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='client.18708 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='client.27599 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='client.27439 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/4094224018' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='client.27629 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3496361360' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 ' entity='mgr.compute-0.cepfkm' 
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Jan 20 19:31:54 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Jan 20 19:31:54 compute-0 sudo[299672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:31:54 compute-0 sudo[299672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:54 compute-0 sudo[299672]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:54 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 20 19:31:54 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980217397' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:31:54 compute-0 sudo[299698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Jan 20 19:31:54 compute-0 sudo[299698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27701 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Jan 20 19:31:55 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833272356' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.086648981 +0000 UTC m=+0.080629362 container create 0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hodgkin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.02869297 +0000 UTC m=+0.022673381 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Optimize plan auto_2026-01-20_19:31:55
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [balancer INFO root] do_upmap
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.nfs', '.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', '.mgr', 'cephfs.cephfs.meta', 'images']
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:31:55 compute-0 systemd[1]: Started libpod-conmon-0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717.scope.
Jan 20 19:31:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.269668018 +0000 UTC m=+0.263648399 container init 0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hodgkin, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.279554975 +0000 UTC m=+0.273535356 container start 0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Jan 20 19:31:55 compute-0 quizzical_hodgkin[299844]: 167 167
Jan 20 19:31:55 compute-0 systemd[1]: libpod-0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717.scope: Deactivated successfully.
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.283389428 +0000 UTC m=+0.277369829 container attach 0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hodgkin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.289445111 +0000 UTC m=+0.283425502 container died 0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hodgkin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fcf9785039b373bf821411a0e16a9d4bd671e558ee1a08a0e5064ad5c6bea30-merged.mount: Deactivated successfully.
Jan 20 19:31:55 compute-0 podman[299806]: 2026-01-20 19:31:55.34027131 +0000 UTC m=+0.334251681 container remove 0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:31:55 compute-0 podman[299845]: 2026-01-20 19:31:55.351942323 +0000 UTC m=+0.146549706 container health_status 7e1325a59f7c34be185601c5aa062a2cb662a6da56e275496586fe6ed87831a1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'bb16ed9497be89365dfb69bb48a6faa8ef9ef60facde9cb61378eaa4dd9e2816-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-9a5e3eef29fc8a236838c9498ec798a592d223c61d6c9912136ed85e9e065e41-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:31:55 compute-0 systemd[1]: libpod-conmon-0837d5c8f37a33e2f13045f3839f9e759171dc73a72cdbb926de3f80a695a717.scope: Deactivated successfully.
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27520 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.001000026s ======
Jan 20 19:31:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:55.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Jan 20 19:31:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 20 19:31:55 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/222772057' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:31:55 compute-0 podman[299911]: 2026-01-20 19:31:55.520457821 +0000 UTC m=+0.048604040 container create 1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:31:55 compute-0 systemd[1]: Started libpod-conmon-1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7.scope.
Jan 20 19:31:55 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:55 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:55 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.102 - anonymous [20/Jan/2026:19:31:55.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:31:55 compute-0 podman[299911]: 2026-01-20 19:31:55.499049364 +0000 UTC m=+0.027195613 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37acbbc92521959440c703d8c37b208c26b15ad4e6e6614996183c13c50330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37acbbc92521959440c703d8c37b208c26b15ad4e6e6614996183c13c50330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37acbbc92521959440c703d8c37b208c26b15ad4e6e6614996183c13c50330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37acbbc92521959440c703d8c37b208c26b15ad4e6e6614996183c13c50330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37acbbc92521959440c703d8c37b208c26b15ad4e6e6614996183c13c50330/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:55 compute-0 podman[299911]: 2026-01-20 19:31:55.610448994 +0000 UTC m=+0.138595233 container init 1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.18783 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: pgmap v1507: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Jan 20 19:31:55 compute-0 ceph-mon[74381]: pgmap v1508: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/3980217397' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/4047131803' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/255762849' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='mgr.14712 192.168.122.100:0/1019599955' entity='mgr.compute-0.cepfkm' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/1833272356' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/2766933334' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:31:55 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/222772057' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:31:55 compute-0 podman[299911]: 2026-01-20 19:31:55.621137191 +0000 UTC m=+0.149283410 container start 1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:31:55 compute-0 podman[299911]: 2026-01-20 19:31:55.624353968 +0000 UTC m=+0.152500207 container attach 1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:31:55 compute-0 ceph-mgr[74676]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:31:55 compute-0 gracious_williams[299934]: --> passed data devices: 0 physical, 1 LVM
Jan 20 19:31:55 compute-0 gracious_williams[299934]: --> All data devices are unavailable
Jan 20 19:31:55 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 20 19:31:55 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178479785' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 19:31:55 compute-0 systemd[1]: libpod-1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7.scope: Deactivated successfully.
Jan 20 19:31:55 compute-0 podman[299911]: 2026-01-20 19:31:55.94605208 +0000 UTC m=+0.474198359 container died 1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca37acbbc92521959440c703d8c37b208c26b15ad4e6e6614996183c13c50330-merged.mount: Deactivated successfully.
Jan 20 19:31:56 compute-0 podman[299911]: 2026-01-20 19:31:56.00398711 +0000 UTC m=+0.532133349 container remove 1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Jan 20 19:31:56 compute-0 systemd[1]: libpod-conmon-1a59dc4a779e75b4f54a2dd4a720de2451945d5f5ee7877b12e62ef6d246afe7.scope: Deactivated successfully.
Jan 20 19:31:56 compute-0 sudo[299698]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:56 compute-0 sudo[299986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:31:56 compute-0 sudo[299986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:56 compute-0 sudo[299986]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:56 compute-0 sudo[300031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- lvm list --format json
Jan 20 19:31:56 compute-0 sudo[300031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:56 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.18828 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mgr[74676]: log_channel(cluster) log [DBG] : pgmap v1509: 337 pgs: 337 active+clean; 41 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.546145927 +0000 UTC m=+0.037017978 container create 1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:31:56 compute-0 systemd[1]: Started libpod-conmon-1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482.scope.
Jan 20 19:31:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.62127865 +0000 UTC m=+0.112150711 container init 1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_almeida, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid)
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.530112065 +0000 UTC m=+0.020984137 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.628985207 +0000 UTC m=+0.119857258 container start 1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325)
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.27701 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.27520 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/3212722816' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/935329175' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.100:0/2178479785' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.102:0/487634295' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Jan 20 19:31:56 compute-0 ceph-mon[74381]: from='client.? 192.168.122.101:0/27767419' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.63242003 +0000 UTC m=+0.123292081 container attach 1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_almeida, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:31:56 compute-0 beautiful_almeida[300160]: 167 167
Jan 20 19:31:56 compute-0 systemd[1]: libpod-1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482.scope: Deactivated successfully.
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.634518306 +0000 UTC m=+0.125390357 container died 1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_almeida, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:31:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-e288af4a5c683434e358a9ebf0e8568163dc56a25461542d13259a4ad47c93ce-merged.mount: Deactivated successfully.
Jan 20 19:31:56 compute-0 podman[300135]: 2026-01-20 19:31:56.682333543 +0000 UTC m=+0.173205594 container remove 1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_almeida, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:31:56 compute-0 systemd[1]: libpod-conmon-1746e0dd44590ce99ce0410dba2ebbfbbd8069f0670843b0e4289dc3a8a4d482.scope: Deactivated successfully.
Jan 20 19:31:56 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 20 19:31:56 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3178359944' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Jan 20 19:31:56 compute-0 podman[300194]: 2026-01-20 19:31:56.85046083 +0000 UTC m=+0.050835290 container create aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_brown, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:31:56 compute-0 systemd[1]: Started libpod-conmon-aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982.scope.
Jan 20 19:31:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:31:56 compute-0 podman[300194]: 2026-01-20 19:31:56.82963389 +0000 UTC m=+0.030008350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Jan 20 19:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d69f0a5448fe1fc4c61b4be3a85a5ad116079087aa91ccacfed29896d917f676/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d69f0a5448fe1fc4c61b4be3a85a5ad116079087aa91ccacfed29896d917f676/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d69f0a5448fe1fc4c61b4be3a85a5ad116079087aa91ccacfed29896d917f676/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d69f0a5448fe1fc4c61b4be3a85a5ad116079087aa91ccacfed29896d917f676/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:31:56 compute-0 podman[300194]: 2026-01-20 19:31:56.956554847 +0000 UTC m=+0.156929297 container init aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Jan 20 19:31:56 compute-0 podman[300194]: 2026-01-20 19:31:56.964197883 +0000 UTC m=+0.164572343 container start aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_brown, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:31:56 compute-0 podman[300194]: 2026-01-20 19:31:56.967757098 +0000 UTC m=+0.168131558 container attach aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_brown, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 19:31:57 compute-0 competent_brown[300216]: {
Jan 20 19:31:57 compute-0 competent_brown[300216]:     "0": [
Jan 20 19:31:57 compute-0 competent_brown[300216]:         {
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "devices": [
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "/dev/loop3"
Jan 20 19:31:57 compute-0 competent_brown[300216]:             ],
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "lv_name": "ceph_lv0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "lv_size": "21470642176",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=aecbbf3b-b405-507b-97d7-637a83f5b4b1,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=5f53c0c6-6046-4836-83f9-ff93da7e674e,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "lv_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "name": "ceph_lv0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "tags": {
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.block_uuid": "q6rYEu-qsKn-2muq-ix5g-hMpV-QCTr-dtQg4n",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.cluster_fsid": "aecbbf3b-b405-507b-97d7-637a83f5b4b1",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.cluster_name": "ceph",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.crush_device_class": "",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.encrypted": "0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.osd_fsid": "5f53c0c6-6046-4836-83f9-ff93da7e674e",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.osd_id": "0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.type": "block",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.vdo": "0",
Jan 20 19:31:57 compute-0 competent_brown[300216]:                 "ceph.with_tpm": "0"
Jan 20 19:31:57 compute-0 competent_brown[300216]:             },
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "type": "block",
Jan 20 19:31:57 compute-0 competent_brown[300216]:             "vg_name": "ceph_vg0"
Jan 20 19:31:57 compute-0 competent_brown[300216]:         }
Jan 20 19:31:57 compute-0 competent_brown[300216]:     ]
Jan 20 19:31:57 compute-0 competent_brown[300216]: }
Jan 20 19:31:57 compute-0 systemd[1]: libpod-aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982.scope: Deactivated successfully.
Jan 20 19:31:57 compute-0 podman[300194]: 2026-01-20 19:31:57.245492527 +0000 UTC m=+0.445866997 container died aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Jan 20 19:31:57 compute-0 ceph-mgr[74676]: log_channel(audit) log [DBG] : from='client.27737 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d69f0a5448fe1fc4c61b4be3a85a5ad116079087aa91ccacfed29896d917f676-merged.mount: Deactivated successfully.
Jan 20 19:31:57 compute-0 podman[300194]: 2026-01-20 19:31:57.292860011 +0000 UTC m=+0.493234461 container remove aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_brown, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:31:57 compute-0 systemd[1]: libpod-conmon-aa91f970532daa8e930141c5ad352d0fad51d33d442b0d8484ece4e7c7b03982.scope: Deactivated successfully.
Jan 20 19:31:57 compute-0 ceph-mon[74381]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 20 19:31:57 compute-0 ceph-mon[74381]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1056574003' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Jan 20 19:31:57 compute-0 ceph-aecbbf3b-b405-507b-97d7-637a83f5b4b1-alertmanager-compute-0[106268]: ts=2026-01-20T19:31:57.324Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Jan 20 19:31:57 compute-0 sudo[300031]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:57 compute-0 radosgw[89571]: ====== starting new request req=0x7f0e9641d5d0 =====
Jan 20 19:31:57 compute-0 radosgw[89571]: ====== req done req=0x7f0e9641d5d0 op status=0 http_status=200 latency=0.000000000s ======
Jan 20 19:31:57 compute-0 radosgw[89571]: beast: 0x7f0e9641d5d0: 192.168.122.100 - anonymous [20/Jan/2026:19:31:57.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Jan 20 19:31:57 compute-0 sudo[300327]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:31:57 compute-0 sudo[300327]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:31:57 compute-0 sudo[300327]: pam_unix(sudo:session): session closed for user root
Jan 20 19:31:57 compute-0 sudo[300360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/aecbbf3b-b405-507b-97d7-637a83f5b4b1/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid aecbbf3b-b405-507b-97d7-637a83f5b4b1 -- raw list --format json
Jan 20 19:31:57 compute-0 sudo[300360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
